Evolution of Computer Architecture

The study of computer architecture involves both hardware organization and programming/software requirements. As seen by an assembly language programmer, computer architecture is abstracted by its instruction set, which includes operation codes (opcode for short), addressing modes, registers, virtual memory, etc.
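
As a concrete sketch of this instruction-set abstraction (purely illustrative; none of the names below come from the passage), the fields an assembly language programmer sees can be modeled as plain data, independent of how the hardware implements them:

from dataclasses import dataclass
from enum import Enum

class AddrMode(Enum):
    # A few common addressing modes visible at the instruction-set level.
    IMMEDIATE = "immediate"   # operand is a constant embedded in the instruction
    REGISTER = "register"     # operand names a register
    DIRECT = "direct"         # operand is a memory address

@dataclass
class Instruction:
    opcode: str               # operation code, e.g. "ADD" or "LOAD"
    mode: AddrMode            # how the operand is to be interpreted
    dest: str                 # destination register, e.g. "R1"
    operand: object           # constant, register name, or address

# ADD R1, #5 -- add the immediate constant 5 to register R1
inst = Instruction("ADD", AddrMode.IMMEDIATE, "R1", 5)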

[Figure 1 (not reproduced): tree showing architectural evolution from sequential scalar computers to vector processors and parallel computers. Legend: I/E = instruction fetch and execute; SIMD = single instruction stream, multiple data streams; MIMD = multiple instruction streams, multiple data streams.]

From the hardware implementation point of view, the abstract machine is organized with CPUs, caches, buses, microcodes, pipelines, physical memory, etc. Therefore, the study of architecture covers both instruction-set architectures and machine implementation organizations.

Over the past four decades, computer architecture has gone through evolutionary rather than revolutionary changes. Sustaining features are those that were proven performance deliverers. We started with the Von Neumann architecture[1], built as a sequential machine executing scalar data. The sequential computer was improved from bit-serial to word-parallel operations, and from fixed-point to floating-point operations. The Von Neumann architecture is slow due to the sequential execution of instructions in programs.

Lookahead, Parallelism and Pipelining[2]

Lookahead techniques were introduced to prefetch instructions in order to overlap I/E (instruction fetch/decode and execution)[3] operations and to enable functional parallelism. Functional parallelism was supported by two approaches: one is to use multiple functional units simultaneously, and the other is to practice pipelining at various processing levels.

The latter includes pipelined instruction execution, pipelined arithmetic computations, and memory-access operations. Pipelining has proven especially attractive in performing identical operations repeatedly over vector data strings. Vector operations were originally carried out implicitly by software-controlled looping using scalar pipeline processors.
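
To see why (a standard result about linear pipelines, not derived in the passage): a k-stage pipeline completes n identical operations in k + (n − 1) clock cycles, versus nk cycles when each operation must finish before the next begins. The speedup is therefore S = nk / (k + n − 1), which approaches k for long vectors; for example, a 4-stage pipeline over a 1,000-element vector gives S = 4000/1003 ≈ 3.99.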

Flynn's Classification[4]

Flynn introduced a classification of various computer architectures based on notions of instruction and data streams in 1972. Conventional sequential machines are called SISD (single instruction stream over a single data stream)[5] computers. Vector computers are equipped with scalar and vector hardware or appear as SIMD (single instruction stream over multiple data streams)[6] machines. Parallel computers are reserved for MIMD (multiple instruction streams over multiple data streams)[7] machines.

MISD (multiple instruction streams over a single data stream)[8] machines have also been modeled: the same data stream flows through a linear array of processors executing different instruction streams. This architecture is also known as a systolic array, used for pipelined execution of specific algorithms.

Of the four machine models, most parallel computers built in the past assumed the MIMD model for general-purpose computations. The SIMD and MISD models are more suitable for special-purpose computations. For this reason, MIMD is the most popular model, SIMD next, and MISD the least popular model applied in commercial machines.

Parallel Computers

Intrinsic parallel computers are those that execute programs in MIMD mode. There are two major classes of parallel computers, namely, shared-memory multiprocessors and message-passing multicomputers. The major distinction between multiprocessors and multicomputers lies in memory sharing and the mechanisms used for interprocessor communication.

The processors in a multiprocessor system communicate with each other through shared variables in a common memory. Each computer node in a multicomputer system has a local memory, unshared with other nodes. Interprocessor communication is done through message passing among the nodes.
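
The following minimal sketch contrasts the two communication mechanisms, using Python's standard library as a stand-in for real hardware; the worker functions, queue names, and counts are all illustrative assumptions, not anything specified in the passage.

import threading
import multiprocessing as mp

# Shared-memory style: "processors" (threads) communicate through a shared
# variable in common memory, guarded by a lock.
counter = 0
lock = threading.Lock()

def shared_memory_worker(n):
    global counter
    for _ in range(n):
        with lock:                 # each update is a critical section
            counter += 1

# Message-passing style: each "node" (process) keeps local, unshared state
# and communicates only by sending and receiving messages.
def message_passing_worker(inbox, outbox, n):
    local_sum = 0                  # local memory, invisible to other nodes
    for _ in range(n):
        local_sum += inbox.get()   # receive a message
    outbox.put(local_sum)          # reply with a message

if __name__ == "__main__":
    threads = [threading.Thread(target=shared_memory_worker, args=(1000,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared counter:", counter)            # 2000

    inbox, outbox = mp.Queue(), mp.Queue()
    node = mp.Process(target=message_passing_worker, args=(inbox, outbox, 5))
    node.start()
    for i in range(5):
        inbox.put(i)                              # interprocessor communication
    print("message-passing sum:", outbox.get())  # 0+1+2+3+4 = 10
    node.join()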

Explicit vector instructions were introduced with the appearance of vector processors. A vector processor is equipped with multiple vector pipelines that can be concurrently used under hardware or firmware control. There are two families of pipelined vector processors.

Memory-to-memory architecture supports the pipelined flow of vector operands directly from the memory to pipelines and then back to the memory. Register-to-register architecture uses vector registers to interface between the memory and functional pipelines.
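
As a rough software analogy of the two families (illustrative only; VLEN and the function names are assumptions, and real vector registers are hardware resources): register-to-register machines strip-mine long vectors through fixed-length vector registers, whereas memory-to-memory machines stream operands straight between memory and the pipelines.

VLEN = 64   # hypothetical vector-register length

def vadd_memory_to_memory(a, b):
    # Operands flow from memory through the add pipeline and back to memory.
    return [x + y for x, y in zip(a, b)]

def vadd_register_to_register(a, b):
    out = []
    for i in range(0, len(a), VLEN):
        vr1 = a[i:i + VLEN]                        # vector load into VR1
        vr2 = b[i:i + VLEN]                        # vector load into VR2
        vr3 = [x + y for x, y in zip(vr1, vr2)]    # pipelined vector add
        out.extend(vr3)                            # vector store to memory
    return out

a = list(range(100))
b = list(range(100))
assert vadd_memory_to_memory(a, b) == vadd_register_to_register(a, b)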

Another important branch of the architecture tree consists of the SIMD computers for synchronized vector processing. An SIMD computer exploits spatial parallelism rather than temporal parallelism as in a pipelined computer. SIMD computing is achieved through the use of an array of processing elements synchronized by the same controller. Associative memory can be used to build SIMD associative processors.
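
A toy rendering of the spatial-parallelism idea (assuming NumPy is available; its elementwise operations stand in for an array of processing elements driven by one controller):

import numpy as np

data = np.arange(8)      # eight data elements, one per processing element
result = data + 1        # one instruction, applied to all elements in lock-step
print(result)            # [1 2 3 4 5 6 7 8]

# The SISD equivalent issues the same instruction once per element, in sequence:
result_scalar = [x + 1 for x in data.tolist()]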

Development Layers

Hardware configurations differ from machine to machine, even among machines of the same model. The address space of a processor in a computer system varies among different architectures. It depends on the memory organization, which is machine-dependent. These features are up to[9] the designer and should match the target application domains.
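
For instance (a standard fact, not stated in the passage), a processor with 32-bit addresses can name 2^32 bytes, or 4 GiB, while 64-bit addressing extends the space to 2^64 bytes; how much of that space is actually backed by physical memory is an implementation decision.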

On the other hand, we want to develop application programs and programming environments which are machine-independent. Independent of machine architecture, the user programs can be ported to many computers with minimum conversion costs. High-level languages and communication models depend on the architectural choices made in a computer system. From a programmer's viewpoint, these two layers should be architecture-transparent.

At present, Fortran, C, Pascal, Ada, and Lisp[10] are supported by most computers. However, the communication models, shared variable versus message passing, are mostly machine-dependent. The Linda approach using tuple spaces offers an architecture-transparent communication model for parallel computers.
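
A toy tuple space in the spirit of Linda is sketched below (a single-process approximation using threads; real Linda implementations are distributed, and the method names out/in_ merely echo Linda's out/in operations). The point is that communication names data patterns, never machines or addresses:

import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):
        # Deposit a tuple into the space and wake any waiting readers.
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def in_(self, *pattern):
        # Withdraw a tuple matching the pattern, blocking until one appears.
        # None in the pattern acts as a wildcard field.
        def matches(tup):
            return len(tup) == len(pattern) and all(
                p is None or p == f for p, f in zip(pattern, tup))
        with self._cond:
            while True:
                for tup in self._tuples:
                    if matches(tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

ts = TupleSpace()
worker = threading.Thread(target=lambda: ts.out("result", sum(range(10))))
worker.start()
print(ts.in_("result", None))   # ('result', 45)
worker.join()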

Application programmers prefer more architectural transparency. However, kernel programmers have to explore the opportunities supported by hardware. A good computer architect has to approach the problem from both ends. The compilers and OS support should be designed to remove as many architectural constraints as possible from the programmer.

New Challenges

The technology of parallel processing is the outgrowth of four decades of research and industrial advances in microelectronics, printed circuits, high-density packaging, advanced processors, memory systems, peripheral devices, communication channels, language evolution, compiler sophistication, operating systems, programming environments, and application challenges.

The rapid progress made in hardware technology has significantly increased the economical feasibility of building a new generation of computers adopting parallel processing. However, the major barrier preventing parallel processing from entering the production mainstream is on the software and application side.

To date, it is still very difficult and painful to program parallel and vector computers[11]. We need to strive for major progress in the software area in order to create a user-friendly environment for high-power computers. A whole new generation of programmers needs to be trained to program parallelism effectively. High-performance computers provide fast and accurate solutions to scientific, engineering, business, social, and defense problems.

Representative real-life problems include weather forecast modeling, computer-aided design of VLSI[12] circuits, large-scale database management, artificial intelligence, crime control, and strategic defense initiatives, just to name a few. The application domains of parallel processing computers are expanding steadily. With a good understanding of scalable computer architectures and mastery of parallel programming techniques, the reader will be better prepared to face future computing challenges.

Notes

[1] the Von Neumann architecture: proposed in 1946 by the Hungarian scientist John von Neumann. Its central idea is the "stored program" concept: programs and data are placed in a linearly addressed memory, fetched in sequence, and then interpreted and executed.

[2] Lookahead, Parallelism and Pipelining: lookahead (prediction/prefetching), parallelism, and pipelining techniques.

[3] I/E (instruction fetch/decode and execution): instruction fetch and execution.

[4] Flynn's Classification: a method proposed by M. J. Flynn in 1966 for classifying computer systems according to their instruction and data streams.

[5] SISD (single instruction stream over a single data stream): single instruction stream, single data stream (also written single instruction, single data).

[6] SIMD (single instruction stream over multiple data streams): single instruction stream, multiple data streams (also written single instruction, multiple data).

[7] MIMD (multiple instruction streams over multiple data streams): multiple instruction streams, multiple data streams (also written multiple instruction, multiple data).

[8] MISD (multiple instruction streams over a single data stream): multiple instruction streams, single data stream (also written multiple instruction, single data).

[9] up to: the responsibility of; to be decided by. E.g., "It is up to them to decide." The sentence here can be rendered as "these features are left to the designer to decide."

[10] Fortran, C, Pascal, Ada, and Lisp: the Fortran, C, Pascal, Ada, and Lisp programming languages, respectively.

[11] vector computers: a type of array computer.

[12] VLSI: very large scale integration.
