Monday, June 3, 2019

Study of various RISC and CISC processor

Study of various reduced instruction set computing (RISC) and complex instruction set computing (CISC) processors

INTRODUCTION
The processor (CPU, for Central Processing Unit) is the computer's brain. It allows the processing of numeric data, i.e., information entered in binary form, and the execution of instructions stored in memory. The first microprocessor (Intel 4004) was invented in 1971. It was a 4-bit calculation device with a speed of 108 kHz. Since then, microprocessor power has grown exponentially.

Operation
The processor is an electronic circuit that operates at the speed of an internal clock, thanks to a quartz crystal that, when subjected to an electric current, sends pulses, called peaks. The clock speed (also called the cycle rate) corresponds to the number of pulses per second, expressed in Hertz (Hz). Thus, a 200 MHz computer has a clock that sends 200,000,000 pulses per second. With each clock peak, the processor performs an action that corresponds to an instruction or a part thereof. A measure called CPI (Cycles Per Instruction) gives a representation of the average number of clock cycles required for a microprocessor to execute an instruction. A microprocessor's power can thus be characterized by the number of instructions per second that it is capable of processing. MIPS (millions of instructions per second) is the unit used, and corresponds to the processor frequency divided by the CPI.

One of the primary goals of computer architects is to design computers that are more cost-effective than their predecessors. Cost-effectiveness includes the cost of hardware to manufacture the machine, the cost of programming, and the costs incurred in debugging both the initial hardware and subsequent programs. If we review the history of computer families, we find that the most common architectural change is the trend toward ever more complex machines.
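The relation between clock frequency, CPI, and MIPS described above can be sketched as a small calculation. The 200 MHz clock comes from the text; the CPI of 2 is an assumed example value, not a figure from any real processor:

```python
# Sketch of the MIPS relation described above: MIPS = frequency / CPI,
# scaled to millions. The CPI value here is a hypothetical example.

def mips(clock_hz: float, cpi: float) -> float:
    """Millions of instructions per second for a given clock and average CPI."""
    instructions_per_second = clock_hz / cpi
    return instructions_per_second / 1_000_000

clock_hz = 200_000_000   # a 200 MHz clock sends 200,000,000 pulses per second
cpi = 2.0                # assumed average of 2 clock cycles per instruction

print(f"{mips(clock_hz, cpi):.0f} MIPS")  # 200e6 / 2 / 1e6 = 100 MIPS
```

Note how a lower CPI directly raises MIPS at the same clock speed, which is exactly the lever RISC designs pull on.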
Presumably this additional complexity has a positive trade-off with regard to the cost-effectiveness of newer models.

The Microprocessor Revolution-
The engine of the computer revolution is the microprocessor. It has led to new inventions, such as FAX machines and personal computers, as well as adding intelligence to existing devices, such as wristwatches and automobiles. Moreover, its performance has improved by a factor of roughly 10,000 in the 25 years since its birth in 1971. This increase coincided with the introduction of Reduced Instruction Set Computers (RISC). The instruction set is the hardware language in which the software tells the processor what to do. Surprisingly, reducing the size of the instruction set (eliminating certain instructions based upon a careful quantitative analysis, and requiring these seldom-used instructions to be emulated in software) can lead to higher performance, for several reasons.

REASONS FOR INCREASED COMPLEXITY
Speed of Memory vs. Speed of CPU-
The complexity began with the transition from the 701 to the 709 [Cocke80]. The 701 CPU was about ten times as fast as the core main memory; this made any primitives that were implemented as subroutines a great deal slower than primitives that were instructions. Turning such primitives into instructions made the 709 more cost-effective than the 701. Since then, many higher-level instructions have been added to machines in an attempt to improve performance.

Microcode and LSI Technology-
Microprogrammed control allows the implementation of complex architectures more cost-effectively than hardwired control. Advances in integrated circuit memories made in the late 60s and early 70s caused microprogrammed control to be the more cost-effective approach in almost every case.
Once the decision is made to use microprogrammed control, the cost to expand an instruction set is very small: only a few more words of control store. Examples of such instructions are string editing, integer-to-floating conversion, and mathematical operations such as polynomial evaluation.

Code Density-
With early computers, memory was very expensive. It was therefore cost-effective to have very compact programs. Attempting to obtain code density by increasing the complexity of the instruction set is often a double-edged sword: the cost of 10% more memory is often far cheaper than the cost of squeezing 10% out of the CPU by architectural innovations.

Marketing Strategy-
Unfortunately, the primary goal of a computer company is not to design the most cost-effective computer; the primary goal of a computer company is to make the most money by selling computers. In order to sell computers, manufacturers must convince customers that their design is superior to their competitors'. In order to keep their jobs, architects must keep selling new and better designs to their internal management.

Upward Compatibility-
Coincident with marketing strategy is the perceived need for upward compatibility. Upward compatibility means that the primary way to improve a design is to add new, and usually more complex, features. Seldom are instructions or addressing modes removed from an architecture, resulting in a gradual increase in both the number and complexity of instructions over a series of computers.

Support for High Level Languages-
As the use of high level languages becomes increasingly popular, manufacturers have become eager to provide more powerful instructions to support them.
Unfortunately, there is little evidence to suggest that any of the more complicated instruction sets have actually provided such support. The effort to support high-level languages is laudable, but we feel that often the focus has been on the wrong issues.

Use of Multiprogramming-
The rise of timesharing required that computers be able to respond to interrupts, with the ability to halt an executing process and restart it at a later time. Memory management and paging additionally required that instructions could be halted before completion and later restarted.

RISC (Reduced Instruction Set Computing)
The acronym RISC (pronounced "risk"), for reduced instruction set computing, represents a CPU design strategy emphasizing the insight that simplified instructions that do less may still provide higher performance if this simplicity can be used to make instructions execute very quickly. Many proposals for a precise definition have been attempted, and the term is being slowly replaced by the more descriptive load-store architecture. Being an old idea, some aspects attributed to the first RISC-labeled designs (around 1975) include the observations that the memory-restricted compilers of the time were often unable to take advantage of features intended to facilitate coding, and that complex addressing inherently takes many cycles to perform. It was argued that such functions would better be performed by sequences of simpler instructions, if this could yield implementations simple enough to cope with very high frequencies, and small enough to leave room for many registers, factoring out slow memory accesses.
A uniform, fixed instruction length with arithmetic restricted to registers was chosen to ease instruction pipelining in these simple designs, with special load-store instructions accessing memory.

The RISC Design Strategies-
The basic RISC principle: a simpler CPU is a faster CPU. The focus of the RISC design is a decrease in the number and complexity of instructions in the ISA. A number of the more common strategies include:
1) Fixed instruction length, generally one word. This simplifies instruction fetch.
2) Simplified addressing modes.
3) Fewer and simpler instructions in the instruction set.
4) Only load and store instructions access memory: no add memory to register, add memory to memory, etc.
5) Let the compiler do it. Use a good compiler to break complex high-level language statements into a number of simple assembly language statements.

Typical characteristics of RISC-
For any given level of general performance, a RISC chip will typically have far fewer transistors dedicated to the core logic, which originally allowed designers to increase the size of the register set and increase internal parallelism. Other features typically found in RISC architectures are:
* Uniform instruction format, using a single word with the opcode in the same bit positions in every instruction, demanding less decoding.
* Identical general purpose registers, allowing any register to be used in any context, simplifying compiler design (although normally there are separate floating point registers).
* Simple addressing modes, with complex addressing performed via sequences of arithmetic and/or load-store operations.
* Fixed length instructions, which (a) are easier to decode than variable length instructions, and (b) use fast, inexpensive memory to execute a larger piece of code.
* Hardwired control (as opposed to microcoded instructions).
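Strategies 4 and 5 above can be illustrated with a toy load-store machine. This is a hypothetical sketch, not any real ISA: a memory-to-memory add that a CISC ISA might express as one instruction is broken by the compiler into simple steps where only loads and stores touch memory:

```python
# Toy load-store machine (hypothetical, for illustration only): the
# compiler turns "c = a + b" into load/load/add/store steps, because
# arithmetic is restricted to registers.

memory = {"a": 7, "b": 5, "c": 0}   # named memory cells for readability
registers = {"r1": 0, "r2": 0}

def load(reg, addr):
    """Only load and store instructions access memory."""
    registers[reg] = memory[addr]

def store(addr, reg):
    memory[addr] = registers[reg]

def add(dst, src):
    """Arithmetic operates on registers only."""
    registers[dst] = registers[dst] + registers[src]

# "c = a + b" compiled into a RISC-style instruction sequence:
load("r1", "a")
load("r2", "b")
add("r1", "r2")
store("c", "r1")

print(memory["c"])  # 12
```

Four simple instructions replace one complex one, but each is trivial to decode and pipeline, which is the whole point of the strategy.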
This is where RISC really shines, as hardware implementation of instructions is much faster and uses less silicon real estate than a microstore area.
* Fused or compound instructions which are heavily optimized for the most commonly used functions.
* Pipelined implementations with the goal of executing one instruction (or more) per machine cycle.
* A large uniform register set, a minimal number of addressing modes, and no/minimal support for misaligned accesses.

RISC Examples-
* Apple iPods (custom ARM7TDMI SoC)
* Apple iPhone (Samsung ARM1176JZF)
* Palm and PocketPC PDAs and smartphones (Intel XScale family, Samsung SC32442 ARM9)
* Nintendo Game Boy Advance (ARM7)
* Nintendo DS (ARM7, ARM9)
* Sony Network Walkman (Sony in-house ARM based chip)

Advantages of RISC-
* Speed
* Simpler hardware
* Shorter design cycle
* User (programmer) benefits

Disadvantages of RISC-
* A more sophisticated compiler is required.
* A sequence of RISC instructions is needed to implement complex instructions.
* RISC machines require very fast memory systems to feed them instructions.
* Performance of a RISC application depends critically on the quality of the code generated by the compiler.

CISC (Complex Instruction Set Computing)
A complex instruction set computer (CISC, pronounced like "sisk") is a computer instruction set architecture (ISA) in which each instruction can execute several low-level operations, such as a load from memory, an arithmetic operation, and a memory store, all in a single instruction.

Performance-
Some instructions were added that were never intended to be used in assembly language but fit well with compiled high level languages. Compilers were updated to take advantage of these instructions. The benefits of semantically rich instructions with compact encodings can be seen in modern processors as well, particularly in the high performance segment where caches are a key component (as opposed to most embedded systems).
This is because these fast, but complex and expensive, memories are inherently limited in size, making compact code beneficial. Of course, the fundamental reason they are needed is that main memories (i.e. dynamic RAM today) remain slow compared to a (high performance) CPU core.

ADVANTAGES OF CISC
* A new processor design could incorporate the instruction set of its predecessor as a subset of an ever-growing language: no need to reinvent the wheel, code-wise, with each design cycle.
* Fewer instructions were needed to implement a particular computing task, which led to lower memory use for program storage and fewer time-consuming instruction fetches from memory.
* Simpler compilers sufficed, as complex CISC instructions could be written that closely resembled the instructions of high-level languages. In effect, CISC made a computer's assembly language more like a high-level language to begin with, leaving the compiler less to do.

DISADVANTAGES OF CISC
* The first advantage listed above could be viewed as a disadvantage. That is, the incorporation of older instruction sets into new generations of processors tended to force growing complexity.
* Many specialized CISC instructions were not used frequently enough to justify their existence. The existence of each instruction needed to be justified because each one requires the storage of more microcode in the central processing unit (the final and lowest layer of code translation), which must be built in at some cost.
* Because each CISC command must be translated by the processor into tens or even hundreds of lines of microcode, it tends to run slower than an equivalent series of simpler commands that do not require so much translation.
All translation requires time.
* Because a CISC machine builds complexity into the processor, where all its various commands must be translated into microcode for actual execution, the design of CISC hardware is more difficult and the CISC design cycle correspondingly long; this means delay in getting to market with a new chip.

Comparison of RISC and CISC
This table is taken from an IEEE tutorial on RISC architecture.

                          CISC Type Computers                  RISC Type
                          IBM 370/168  VAX-11/780  Intel 8086  RISC I   IBM 801
Developed                 1973         1978        1978        1981     1980
Instructions              208          303         133         31       120
Instruction size (bits)   16-48        16-456      8-32        32       32
Addressing modes          4            22          6           3        3
General registers         16           16          4           138      32
Control memory size       420 Kb       480 Kb      Not given   0        0
Cache size                64 Kb        64 Kb       Not given   0        Not given

However, nowadays, the difference between RISC and CISC chips is getting smaller and smaller. RISC and CISC architectures are becoming more and more alike. Many of today's RISC chips support just as many instructions as yesterday's CISC chips. The PowerPC 601, for example, supports more instructions than the Pentium. Yet the 601 is considered a RISC chip, while the Pentium is definitely CISC.

RISCs are leading in-
* New machine designs
* Research funding
* Publications
* Reported performance

CISCs are leading in-
* Revenue

Performance-
* The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction.
* RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program.
* Hybrid solutions also exist, such as a RISC core behind a CISC interface, with their own specific performance trade-offs.

Future Aspects-
Today's microprocessors are roughly 10,000 times faster than their ancestors. And microprocessor-based computer systems now cost only 1/40th as much as their ancestors, when inflation is considered. The result: an overall cost-performance improvement of roughly 1,000,000, in only 25 years. This extraordinary advance is why computing plays such a large role in today's world.
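The CISC vs. RISC trade-off in the Performance list above can be made concrete with the classic execution-time equation: time = instructions × CPI / frequency. The instruction counts and CPIs below are made-up illustrative values, not measurements of any real chip:

```python
# Hypothetical comparison of the trade-off above, using the standard
# performance equation: time = instructions x CPI / clock frequency.
# All workload numbers here are invented for illustration.

def exec_time_s(instructions: int, cpi: float, clock_hz: float) -> float:
    return instructions * cpi / clock_hz

clock_hz = 100e6   # assume both machines run at 100 MHz

# CISC: fewer instructions per program, but more cycles per instruction.
cisc = exec_time_s(instructions=1_000_000, cpi=6.0, clock_hz=clock_hz)
# RISC: more instructions per program, but far fewer cycles each.
risc = exec_time_s(instructions=2_500_000, cpi=1.2, clock_hz=clock_hz)

print(f"CISC: {cisc:.3f} s, RISC: {risc:.3f} s")  # CISC: 0.060 s, RISC: 0.030 s
```

With these assumed numbers the RISC program is 2.5x longer yet finishes in half the time, showing why neither "fewer instructions" nor "fewer cycles" wins by itself; only the product matters.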
Had the research at universities and industrial laboratories not occurred, and had the complex interplay between government, industry, and academia not been so successful, a comparable advance would still be years away. Microprocessor performance can continue to double every 18 months beyond the turn of the century. This rate can be sustained by continued research innovation. Significant new ideas will be needed in the next decade to continue the pace; such ideas are being developed by research groups today.

Conclusion
The research that led to the development of RISC architectures represented an important shift in computer science, with emphasis moving from hardware to software. The eventual dominance of RISC technology in high-performance workstations from the mid to late 1980s was a deserved success. In recent years, CISC processors have been designed that successfully overcome the limitations of their instruction sets; the RISC instruction set architecture is more elegant and power-efficient, but compilers need to be improved and clock speeds need to increase to match the aggressive design of the latest Intel processors.

REFERENCES
Books
1. Computer System Architecture by M. Morris Mano
2. Processor Architecture by Jurij Silc, Borut Robic
3. George Radin, "The 801 Minicomputer", IBM Journal of Research and Development, Vol. 27, No. 3, 1983
4. John Cocke and V. Markstein, "The evolution of RISC technology at IBM", IBM Journal of Research and Development, Vol. 34, No. 1, 1990
5. Dileep Bhandarkar, "RISC versus CISC: A Tale of Two Chips", Intel Corporation, Santa Clara, California
Encyclopedias
1. Encarta
2. Britannica
