Building a RISC System in an FPGA, Part 2: Pipeline and Control Unit Design

Jan Gray

In Part 1, Jan introduced his plan to build a pipelined 16-bit RISC processor and system-on-a-chip in an FPGA. This month, he explores the CPU pipeline and designs the control unit. Listen up, because next month he'll tie it all together.

Last month I discussed the instruction set and the datapath of the xr16 16-bit RISC processor. Now I'll explain how the control unit pushes the datapath's buttons.

Figure 2 in Part 1 (Circuit Cellar 116) showed the CTRL16 control unit schematic symbol in context. Its inputs include the RDY signal from the memory controller, the next instruction word INSN15:0 from memory, and the zero, negative, carry, and overflow outputs from the datapath. The control unit's outputs manage the datapath; they include pipeline controls, clock enables, register and operand selectors, ALU controls, and result multiplexer output enables.

Before designing the control circuitry, first consider how the pipeline behaves in both good times and bad.

PIPELINED EXECUTION

To increase instruction throughput, the xr16 has a three-stage pipeline: instruction fetch (IF), decode and operand fetch (DC), and execute (EX).

In the IF stage, the processor reads memory at the current PC address, captures the resulting instruction word in the instruction register (IR), and increments the PC for the next cycle. In the DC stage, the instruction is decoded and its operands are read from the register file or extracted from an immediate field in the IR. In the EX stage, the function units act upon the operands. One result is driven through three-state buffers onto the result bus and is written back into the register file as the cycle ends.

Consider executing a series of instructions, assuming no memory wait states. In every pipeline cycle, you fetch a new instruction and write back its result two cycles later. You simultaneously prepare the next instruction address (PC + 2).
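To make that overlap concrete, here is a small cycle-by-cycle model of the three-stage flow: the instruction fetched in cycle t is decoded in cycle t+1 and executed and written back in cycle t+2. This C program is only an illustrative sketch, not part of the xr16 design files; the instruction strings and trace format are invented for the example, and no wait states, branches, or stalls are modeled.

/*
 * Illustrative sketch of the xr16's three-stage pipeline overlap.
 * Each cycle fetches one instruction; that instruction reaches DC one
 * cycle later and EX (with writeback) two cycles later.
 */
#include <stdio.h>

#define STAGES 3            /* IF, DC, EX */

int main(void)
{
    /* Hypothetical instruction sequence, for the trace only. */
    const char *insn[] = { "ADD r1,r2,r3", "LW r4,(r5)", "SUB r6,r7,r8",
                           "SW r4,(r9)",   "ADDI r1,4" };
    const int n = sizeof insn / sizeof insn[0];

    printf("cycle   IF              DC              EX (writeback)\n");

    for (int t = 0; t < n + STAGES - 1; t++) {
        int f = t;          /* instruction entering IF this cycle      */
        int d = t - 1;      /* instruction in DC (fetched last cycle)  */
        int e = t - 2;      /* instruction in EX, writing back now     */

        printf("t%-6d %-15s %-15s %-15s\n", t + 1,
               (f >= 0 && f < n) ? insn[f] : "-",
               (d >= 0 && d < n) ? insn[d] : "-",
               (e >= 0 && e < n) ? insn[e] : "-");
    }
    return 0;
}

Running it prints a pipeline occupancy table: by cycle t3 the first instruction is in EX while the second is in DC and the third is in IF, which is exactly the steady-state overlap the text describes.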
