[Subjective question]

The pipelining approach includes pipelined instruction execution.

More questions related to “The pipelining approach includ…”:
Question 1

Parallelism appears in the following forms except ______.

A. lookahead, pipelining, data parallelism

B. vectorization, concurrency, partitioning

C. simultaneity, overlapping, replication

D. multitasking, miniaturization, digitalization

Question 2

Answer the following question.

The Importance of Good Writing

Like fine food, good writing is something we approach with pleasure and enjoy from the first taste to the last. (46) Quite the contrary, just as the cook has to undergo an intensive training, mastering the skills of his trade, the writer must sit at his desk and devote long hours to achieving a style in his writing, whatever its purpose -- school work, matters of business, or purely social communication. (47) There are still some remote places in the world where you might find someone to do your business or social writing for you, for a fee. There are a few managers who are lucky enough to have the service of that rare kind of secretary who can take care of all sorts of letter writing with no more than a quick note to work from. (48) We have to write school papers, business papers or home papers. We are constantly called on to put words to paper. It would be difficult to count the number of such words, messages, letters, and reports put into the mails or delivered by hand, but the daily figure must be enormous. (49) We want to arouse and hold the interest of readers. We want whatever we write to be read, from first word to last, not thrown into some "letters-to-be-read" file or into a wastepaper basket. (50)

A. But for most of us, if there is any writing to be done, we have to do it ourselves.

B. However, the managers may sometimes cause the writers a lot of trouble.

C. Any good writers, like good cooks, do not suddenly appear full-blown (mature).

D. What is more, everyone who writes expects, or at least hopes, that his writing will be read.

E. This is the reason we bend our efforts toward learning and practicing the skills of interesting, effective writing.

F. You may be sure that the greater the effort, the more effective the writing, and the more rewarding.

Answer (46): __________

A.A

B.B

C.C

D.D

E.E

F.F

Question 3

Evolution of Computer Architecture

The study of computer architecture involves both hardware organization and programming/software requirements. As seen by an assembly language programmer, computer architecture is abstracted by its instruction set, which includes operation codes (opcode for short), addressing modes, registers, virtual memory, etc.


Legend:

I/E: Instruction Fetch and Execute

SIMD: Single Instruction Stream and Multiple Data Streams

MIMD: Multiple Instruction Streams and Multiple Data Streams

Figure 1. Tree showing architectural evolution from sequential scalar computers to vector processors and parallel computers

From the hardware implementation point of view, the abstract machine is organized with CPUs, caches, buses, microcodes, pipelines, physical memory, etc. Therefore, the study of architecture covers both instruction-set architectures and machine implementation organizations.

Over the past four decades, computer architecture has gone through evolutionary rather than revolutionary changes. Sustaining features are those that were proven performance deliverers. We started with the Von Neumann architecture [1], built as a sequential machine executing scalar data. The sequential computer was improved from bit-serial to word-parallel operations, and from fixed-point to floating-point operations. The Von Neumann architecture is slow due to the sequential execution of instructions in programs.
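To make the sequential bottleneck concrete, here is a minimal C sketch, not drawn from the passage, of the fetch-decode-execute cycle of a Von Neumann machine; the four-opcode instruction set is invented for illustration. Each instruction must finish before the next is fetched, which is exactly the serialization that lookahead and pipelining (below) attack.

/* A minimal sketch of the sequential fetch-decode-execute cycle that
 * characterizes a Von Neumann machine. The opcode set is hypothetical. */
#include <stdio.h>

enum opcode { LOAD, ADD, STORE, HALT };

struct instr { enum opcode op; int addr; };

int main(void) {
    int memory[8] = {3, 4, 0};              /* data: mem[2] will hold 3 + 4 */
    struct instr program[] = {
        {LOAD, 0}, {ADD, 1}, {STORE, 2}, {HALT, 0}
    };
    int acc = 0, pc = 0;                    /* accumulator and program counter */

    for (;;) {
        struct instr i = program[pc++];     /* fetch: strictly one at a time */
        switch (i.op) {                     /* decode and execute */
        case LOAD:  acc = memory[i.addr]; break;
        case ADD:   acc += memory[i.addr]; break;
        case STORE: memory[i.addr] = acc; break;
        case HALT:  printf("mem[2] = %d\n", memory[2]); return 0;
        }
    }
}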

Lookahead, Parallelism and Pipelining[2]

Lookahead techniques were introduced to prefetch instructions in order to overlap I/E (instruction fetch/decode and execution) [3] operations and to enable functional parallelism. Functional parallelism was supported by two approaches: one is to use multiple functional units simultaneously, and the other is to practice pipelining at various processing levels.

The latter includes pipelined instruction execution, pipelined arithmetic computations, and memory-access operations. Pipelining has proven especially attractive in performing identical operations repeatedly over vector data strings. Vector operations were originally carried out implicitly by software-controlled looping using scalar pipeline processors.
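As an illustration, here is a minimal C sketch, under no particular machine assumption, of the software-controlled looping the passage mentions: the same operation is repeated over vector data, one element per iteration on a scalar pipeline, whereas a vector processor could issue the whole loop as a single vector instruction.

/* A minimal sketch: a software-controlled loop applying one identical
 * operation over vector data, the pattern that pipelined vector
 * hardware accelerates. */
#include <stddef.h>

void vector_add(const float *a, const float *b, float *c, size_t n) {
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];   /* identical operation repeated over the data */
}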

Flynn's Classification[4]

Flynn introduced a classification of various computer architectures based on notions of instruction and data streams in 1972. Conventional sequential machines are called SISD (single instruction stream over a single data stream) [5] computers. Vector computers are equipped with scalar and vector hardware or appear as SIMD (single instruction stream over multiple data streams) [6] machines. Parallel computers are reserved for MIMD (multiple instruction streams over multiple data streams) [7] machines.

MISD (multiple instruction streams and a single data stream) [8] machines have also been modeled. The same data stream flows through a linear array of processors executing different instruction streams. This architecture is also known as a systolic array, used for pipelined execution of specific algorithms.

Of the four machine models, most parallel computers built in the past assumed the MIMD model for general-purpose computations. The SIMD and MISD models are more suitable for special-purpose computations. For this reason, MIMD is the most popular model, SIMD next, and MISD the least popular in commercial machines.

Parallel Computers

Intrinsic parallel computers are those that execute programs in MIMD mode. There are two major classes of parallel computers, namely, shared-memory multiprocessors and message-passing multicomputers. The major distinction between multiprocessors and multicomputers lies in memory sharing and the mechanisms used for interprocessor communication.

The processors in a multiprocessor system communicate with each other through shared variables in a common memory. Each computer node in a multicomputer system has a local memory, unshared with other nodes. Interprocessor communication is done through message passing among the nodes.
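The shared-variable style can be sketched with POSIX threads; this is an illustrative example, not taken from the passage, and the worker/shared_sum names are invented. Two threads communicate by updating one variable in common memory, guarded by a mutex; on a message-passing multicomputer the same exchange would instead be an explicit send and receive between nodes with private memories.

/* A minimal sketch of shared-variable communication on a multiprocessor:
 * two threads update one counter in common memory under a mutex.
 * Compile with: cc sketch.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static long shared_sum = 0;                      /* the shared variable */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long my_part = (long)arg;
    pthread_mutex_lock(&lock);                   /* synchronize access */
    shared_sum += my_part;                       /* communicate via shared memory */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)40L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_sum = %ld\n", shared_sum);    /* prints 42 */
    return 0;
}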

Explicit vector instructions were introduced with the appearance of vector processors. A vector processor is equipped with multiple vector pipelines that can be concurrently used under hardware or firmware control. There are two families of pipelined vector processors.

Memory-to-memory architecture supports the pipelined flow of vector operands directly from the memory to pipelines and then back to the memory. Register-to-register architecture uses vector registers to interface between the memory and functional pipelines.

Another important branch of the architecture tree consists of the SIMD computers for synchronized vector processing. An SIMD computer exploits spatial parallelism rather than temporal parallelism as in a pipelined computer. SIMD computing is achieved through the use of an array of processing elements synchronized by the same controller. Associative memory can be used to build SIMD associative processors.
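A flavor of SIMD lock-step execution can be shown with x86 SSE intrinsics; using them here is an assumption for illustration, since the passage's SIMD machines are arrays of processing elements rather than short-vector units, but the principle is the same: one instruction operates on several data elements at once, exploiting spatial parallelism.

/* A minimal sketch of spatial parallelism with SSE: one instruction,
 * _mm_add_ps, performs four float additions in lock-step, like four
 * processing elements obeying the same controller. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    __m128 va = _mm_loadu_ps(a);     /* load four elements at once */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);  /* one instruction, four additions */
    _mm_storeu_ps(c, vc);
    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);  /* 11 22 33 44 */
    return 0;
}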

Development Layers

Hardware configurations differ from machine to machine, even those of the same model. The address space of a processor in a computer system varies among different architectures. It depends on the memory organization, which is machine-dependent. These features are up to [9] the designer and should match the target application domains.

On the other hand, we want to develop application programs and programming environments which are machine-independent. Independent of machine architecture, the user programs can be ported to many computers with minimum conversion costs. High-level languages and communication models depend on the architectural choices made in a computer system. From a programmer's viewpoint, these two layers should be architecture-transparent.

At present, Fortran, C, Pascal, Ada, and Lisp [10] are supported by most computers. However, the communication models, shared variable versus message passing, are mostly machine-dependent. The Linda approach using tuple spaces offers an architecture-transparent communication model for parallel computers.

Application programmers prefer more architectural transparency. However, kernel programmers have to explore the opportunities supported by hardware. As a good computer architect, one has to approach the problem from both ends. The compilers and OS support should be designed to remove as many architectural constraints as possible from the programmer.

New Challenges

The technology of parallel processing is the outgrowth of four decades of research and industrial advances in microelectronics, printed circuits, high-density packaging, advanced processors, memory systems, peripheral devices, communication channels, language evolution, compiler sophistication, operating systems, programming environments, and application challenges.

The rapid progress made in hardware technology has significantly increased the economical feasibility of building a new generation of computers adopting parallel processing. However, the major barrier preventing parallel processing from entering the production mainstream is on the software and application side.

To date, it is still very difficult and painful to program parallel and vector computers [11]. We need to strive for major progress in the software area in order to create a user-friendly environment for high-power computers. A whole new generation of programmers needs to be trained to program parallelism effectively. High-performance computers provide fast and accurate solutions to scientific, engineering, business, social, and defense problems.

Representative real-life problems include weather forecast modeling, computer-aided design of VLSI [12] circuits, large-scale database management, artificial intelligence, crime control, and strategic defense initiatives, just to name a few. The application domains of parallel processing computers are expanding steadily. With a good understanding of scalable computer architectures and mastery of parallel programming techniques, the reader will be better prepared to face future computing challenges.

Notes

[1] the Von Neumann architecture: proposed by the Hungarian scientist Von Neumann in 1946. Its core idea is the "stored program" concept: programs and data are stored in a linearly addressed memory, fetched in sequence, and then interpreted and executed.

[2] Lookahead, Parallelism and Pipelining: lookahead (prediction), parallelism, and pipelining techniques.

[3] I/E (instruction fetch/decode and execution): instruction fetch, decode, and execution.

[4] Flynn Classification: Flynn's taxonomy, a method proposed by M. J. Flynn in 1966 for classifying computer systems according to their instruction and data streams.

[5] SISD (single instruction stream over a single data stream): also written single instruction, single data.

[6] SIMD (single instruction stream over multiple data streams): also written single instruction, multiple data.

[7] MIMD (multiple instruction streams over multiple data streams): also written multiple instruction, multiple data.

[8] MISD (multiple instruction streams and a single data stream): also written multiple instruction, single data.

[9] up to: the responsibility of someone, as in "It is up to them to decide." The sentence here can be read as "these features are decided by the designer."

[10] Fortran, C, Pascal, Ada, and Lisp: the Fortran, C, Pascal, Ada, and Lisp programming languages, respectively.

[11] vector computers: a type of array computer.

[12] VLSI: very large scale integration.

Question 4

Parallel Computer Models

Parallel processing has emerged as a key enabling technology in modern computers, driven by the ever-increasing demand for higher performance, lower costs, and sustained productivity in real-life applications. Concurrent events are taking place in today's high-performance computers due to the common practice of multiprogramming, multiprocessing, or multicomputing.

Parallelism appears in various forms, such as lookahead, pipelining, vectorization, concurrency, simultaneity, data parallelism, partitioning, interleaving, overlapping, multiplicity, replication, time sharing, space sharing, multitasking, multiprogramming, multithreading, and distributed computing at different processing levels.

In this part, we model physical architectures of parallel computers, vector supercomputers [1], multiprocessors, multicomputers, and massively parallel processors. Theoretical machine models are also presented, including the parallel random-access machines (PRAMs) [2] and the complexity model of VLSI (very large-scale integration) circuits. Architectural development tracks are identified with case studies in the article. Hardware and software subsystems are introduced to pave the way for detailed studies in the subsequent section.

The State of Computing

Modern computers are equipped with powerful hardware facilities driven by extensive software packages. To assess state-of-the-art [3] computing, we first review historical milestones in the development of computers. Then we take a grand tour of the crucial hardware and software elements built into modern computer systems. We then examine the evolutionary relations in milestone architectural development. Basic hardware and software factors are identified in analyzing the performance of computers.

Computer Development Milestones

Computers have gone through two major stages of development: mechanical and electronic. Prior to 1945, computers were made with mechanical or electromechanical parts. The earliest mechanical computer can be traced back to 500 BC in the form of the abacus used in China. The abacus is manually operated to perform decimal arithmetic with carry propagation digit by digit.
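As a small illustration, not from the passage, the digit-by-digit carry propagation an abacus operator performs can be written out in C; the fixed eight-digit representation is an arbitrary choice.

/* A minimal sketch of decimal addition with digit-by-digit carry
 * propagation. Digits are stored least significant first. */
#include <stdio.h>

#define DIGITS 8

void decimal_add(const int a[DIGITS], const int b[DIGITS], int sum[DIGITS]) {
    int carry = 0;
    for (int i = 0; i < DIGITS; i++) {        /* one column at a time */
        int d = a[i] + b[i] + carry;
        sum[i] = d % 10;                      /* digit kept in this column */
        carry  = d / 10;                      /* carry propagated to the next */
    }
}

int main(void) {
    int a[DIGITS] = {9, 9, 4, 0, 0, 0, 0, 0};   /* 499, least digit first */
    int b[DIGITS] = {3, 0, 5, 0, 0, 0, 0, 0};   /* 503 */
    int s[DIGITS];
    decimal_add(a, b, s);
    for (int i = DIGITS - 1; i >= 0; i--) printf("%d", s[i]);
    printf("\n");                               /* prints 00001002 */
    return 0;
}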

Blaise Pascal built a mechanical adder/subtractor in France in 1642. Charles Babbage designed a difference engine in England for polynomial evaluation in 1827. Konrad Zuse built the first binary mechanical computer in Germany in 1941. Howard Aiken [4] proposed the very first electromechanical decimal computer, which was built as the Harvard Mark I [5] by IBM in 1944. Both Zuse's and Aiken's machines were designed for general-purpose computations.

Obviously, the fact that computing and communication were carried out with moving mechanical parts greatly limited the computing speed and reliability of mechanical computers. Modern computers were marked by the introduction of electronic components. The moving parts in mechanical computers were replaced by high-mobility electrons in electronic computers. Information transmission by mechanical gears or levers was replaced by electric signals traveling almost at the speed of light.

Computer Generations

Over the past five decades, electronic computers have gone through five generations of development. Each of the first three generations lasted about 10 years. The fourth generation covered a time span of 15 years. We have just entered the fifth generation with the use of processors and memory devices with more than 1 million transistors on a single silicon chip.

The division of generations is marked primarily by sharp changes in hardware and software technologies. Most features introduced in earlier generations have been passed to later generations. In other words, the latest generation computers have inherited all the nice features and eliminated all the bad ones found in previous generations.

Elements of Modern Computers

Hardware, software, and programming elements of a modern computer system are briefly introduced below in the context of parallel processing.

Computing Problems

It has been long recognized that the concept of computer architecture is no longer restricted to the structure of the bare machine hardware. A modern computer is an integrated system consisting of machine hardware, an instruction set, system software, application programs, and user interfaces. These system elements are depicted in Fig. 1. The use of a computer is driven by real-life problems demanding fast and accurate solutions. Depending on the nature of the problems, the solutions may require different computing resources.

[Fig. 1 appears here: the system elements of a modern computer.]

For numerical problems in science and technology, the solutions demand complex mathematical formulations and tedious integer or floating-point computations. For alphanumerical problems in business and government, the solutions demand accurate transactions, large database management, and information retrieval operations.

For artificial intelligence (AI) problems, the solutions demand logic inferences and symbolic manipulations. These computing problems have been labeled numerical computing, transaction processing, and logical reasoning. Some complex problems may demand a combination of these processing modes.

Algorithms and Data Structures

Special algorithms and data structures are needed to specify the computations and communications involved in computing problems. Most numerical algorithms are deterministic, using regularly structured data. Symbolic processing may use heuristics or nondeterministic searches over large knowledge bases.

Problem formulation and the development of parallel algorithms often require interdisciplinary interactions among theoreticians, experimentalists, and computer programmers. There are many books dealing with the design and mapping of algorithms or heuristics onto parallel computers. In this article, we are more concerned about the resources mapping problems than the design and analysis of parallel algorithms.

Hardware Resources

The system architecture of a computer is represented by three nested circles on the right in Fig. 1. A modern computer system demonstrates its power through coordinated efforts by hardware resources, an operating system, and application software. Processors, memory, and peripheral devices form the hardware core of a computer system. We will study instruction-set processors, memory organization, multiprocessors, supercomputers, multicomputers, and massively parallel computers.

Special hardware interfaces are often built into I/O devices, such as terminals, workstations, optical page scanners, magnetic ink character recognizers, modems, file servers, voice data entry, printers, and plotters. These peripherals are connected to mainframe computers directly or through local or wide-area networks.

In addition, software interface programs are needed. These software interfaces include file transfer systems, editors, word processors, device drivers, interrupt handlers, network communication programs, etc. These programs greatly facilitate the portability of user programs on different machine architectures.

Operating System

An effective operating system manages the allocation and deallocation of resources during the execution of user programs. We will study UNIX [6] extensions for multiprocessors and multicomputers later. The Mach/OS kernel and OSF/1 [7] will be specially studied for multithreaded kernel functions, virtual memory management, the file subsystem, and network communication services. Beyond the OS, application software must be developed to benefit the users. Standard benchmark programs are needed for performance evaluation.

Notes

[1] vector super-computers: vector supercomputer architectures. Most current vector supercomputers still use multiple-pipeline structures, while some adopt parallel processing organizations.

[2] parallel random-access machines (PRAMs): a machine model with an arbitrary number of processors and separate memories for input, output, and working storage.

[3] state-of-the-art: the most advanced technical level; technology currently under development, or leading in present applications.

[4] Howard Aiken: the designer of the Mark I computer.

[5] Harvard Mark I: an electromechanical calculator designed by Howard Aiken of Harvard University in the late 1930s and early 1940s and built by IBM.

[6] UNIX: the UNIX operating system.

[7] Mach/OS kernel and OSF/1: the Mach operating system and its kernel. In an operating system, the kernel is the program that implements basic functions such as hardware resource allocation and process scheduling; it deals directly with the hardware and remains resident in memory. OSF/1 is the Open Software Foundation/1 operating system.

Choose the best answer for each of the following:
