Section 1
CPU Bus

This section discusses topics that are directly related to the CPU bus, including how the 660 Bridge decodes CPU-initiated transfers as a function of the transfer type and address range. For more information, refer to The IBM27-82660 PowerPC to PCI Bridge User's Manual (660 Bridge User's Manual). See Figure 1 for a layout of the CPU bus.

The term 604, as used herein, refers to the PowerPC 604 family of CPUs.

1.1 CPU Busmasters

The reference design motherboard has no embedded CPU. Instead, it supports one or two CPU busmaster cards that plug into the system bus via connectors. There are two CPU card connectors, or slots. Each CPU busmaster must conform to the 604 bus interface specification; it must look like a 604 to the system bus.

At least one CPU card must be installed. This card can contain either a 604, or a 604 plus a serial L2 controller that looks like a 604 to the system bus. Since the 660 Bridge parks the CPU bus on CPU1, uniprocessor systems perform better when the CPU card is installed in CPU slot 1.

CPU slot 2 can contain either a second CPU card for SMP operation, or a non-604 device (such as an L2 cache or other CPU bus agent) that complies with the 604 bus interface specification. Use only CPU cards of similar configuration and capability to implement SMP.

Both CPU cards are supported by the motherboard over a CPU bus interface that contains full interrupt, clocking, and card ID functions. The reference design offers robust support for both uniprocessor and SMP systems.

One level of address bus pipelining is supported. One level of write posting is provided. Precise exceptions are reported via TEA#, and imprecise exceptions are reported via MCP#. PIO (programmed I/O) transactions (XATS# type) are not supported.

1.1.1 CPU Bus Arbitration

The reference design supports three busmasters on the CPU bus (two CPU busmaster cards and the 660 Bridge). CPU bus arbitration between the three busmasters is handled by the arbiter in the 660 Bridge, which coordinates the activities of the three CPU bus agents: CPU1 (in CPU card slot 1), CPU2 (in CPU card slot 2), and the 660 Bridge snoop engine. The snoop engine is a conceptual set of logic inside the bridge that broadcasts snoop cycles to the CPU bus in response to PCI to memory transactions. To minimize CPU1 to memory latency, the 660 Bridge parks the CPU bus on CPU1 while the bus is idle.

For more information on CPU bus arbitration, see The 660 Bridge User's Manual.

1.1.2 Fast L2/Data Streaming Mode (No-DRTRY#)

The reference design default configuration is DRTRY# mode. For use with CPU cards that run in no-DRTRY# (fast L2/data streaming) mode, the reference motherboard can be reconfigured. Remove R232 to isolate the DBB# on each CPU card from the other CPU card. Each installed CPU card must then provide a pullup on DBB#.

In no-DRTRY# mode, each CPU card must also provide correct generation of DRTRY# for the CPU on the card. For the PowerPC 604 CPU card, wire DRTRY# of the CPU to HRESET# of the connector (rather than to connector DRTRY#). This provides a low level on the CPU DRTRY# at reset and a high level under normal operation, which places the 604 CPU in fast L2/data streaming mode.

1.1.3 CPU Bus Frequency

The reference design supports CPU bus speeds of 60MHz and 66MHz, and PCI bus speeds of 30MHz and 33MHz. The reference design is initially configured with a CPU:PCI bus clock ratio of 2:1. The CPU:PCI clock ratio can be changed to 1:1 by reconfiguring the PCI PLL and the 660 Bridge, as long as other system considerations are handled correctly. The 3:1 mode is not available due to limitations of the PCI PLL. See Section for more information on clock issues.

1.1.4 Bi-Endian Mode Operation

Bi-endian mode operation is discussed in the Endian section.

1.2 System Response by CPU Bus Transfer Type

All access to the rest of the system is provided to the CPU by the 660 Bridge. Table 1 shows 660 Bridge decoding of CPU bus transfer types. Based on TT[0:3], the 660 Bridge responds to CPU bus master cycles by generating a read transaction, a write transaction, or an address-only response. The 660 Bridge ignores TT[4] when it evaluates the transfer type.

The bridge decodes the target of the transaction based on the address range of the transfer, as shown in Table 2. The transfer type decoding shown in Table 1 combines with the target decoding to determine the bridge's response.

Table 1. TT[0:3] (Transfer Type) Decoding by 660 Bridge

TT[0:3] | 60X Operation | 60X Bus Transaction | 660 Bridge Operation for CPU to Memory Transfers | 660 Bridge Operation for CPU to PCI Transactions
0000 | Clean block or lwarx | Address only | Asserts AACK#. No other response. | No PCI transaction.
0001 | Write with flush | SBW(1) or burst | Memory write operation. | PCI write transaction.
0010 | Flush block or stwcx | Address only | Asserts AACK#. No other response. | No PCI transaction.
0011 | Write with kill | SBW or burst | Memory write operation. L2 invalidates addressed block. | PCI write transaction.
0100 | sync or tlbsync | Address only | Asserts AACK#. No other response. | No PCI transaction.
0101 | Read or read with no intent to cache | SBR(1) or burst | Memory read operation. | PCI read transaction.
0110 | Kill block or icbi | Address only | Asserts AACK#. L2 invalidates addressed block. | Asserts AACK#. No other response.
0111 | Read with intent to modify | Burst | Memory read operation. | PCI read transaction.
1000 | eieio | Address only | Asserts AACK#. No other response. | No PCI transaction.
1001 | Write with flush atomic, stwcx | SBW | Memory write operation. | PCI write transaction.
1010 | ecowx | SBW | Asserts AACK#, and TA# if the transaction is not claimed by another 60X bus device. No other response. | No PCI transaction.
1011 | Reserved | - | Asserts AACK#. No other response. | No PCI transaction.
1100 | TLB invalidate | Address only | Asserts AACK#. No other response. | No PCI transaction.
1101 | Read atomic, lwarx | SBR or burst | Memory read operation. | PCI read transaction.
1110 | External control in, eciwx | Address only | The 660 asserts all ones on the CPU data bus, and asserts AACK#, and TA# if the transaction is not claimed by another 60X bus device. No other response. | No PCI transaction.
1111 | Read with intent to modify atomic, stwcx | Burst | Memory read operation. | PCI read transaction.

Note:
1) As used in this table, SBR means Single-Beat Read and SBW means Single-Beat Write.

Transfer types in Table 1 that have the same response are handled identically by the bridge. For example, if the address is the same, the bridge generates the same memory read transaction for transfer types 0101, 0111, 1101, and 1111.

The 660 Bridge does not generate PCI or system memory transactions in response to address-only transfers. The bridge does drive all-ones onto the CPU bus and signals TA# during an eciwx if no other CPU bus agent claims the transfer.

References in the remainder of this document to a CPU read assume one of the transfer types in Table 1 that produces the read response from the 660 Bridge. Likewise, references to a CPU write refer to those transfer types that produce the write response.
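
A rough C sketch of this decode is shown below, for illustration only; the enum and function names are invented here, and the 660 Bridge performs this decode in hardware.

    /* Response categories of Table 1; names are illustrative. */
    typedef enum {
        RESP_ADDRESS_ONLY,      /* AACK# only; no memory or PCI transaction   */
        RESP_READ,              /* memory read, or PCI read if PCI-targeted   */
        RESP_WRITE,             /* memory write, or PCI write if PCI-targeted */
        RESP_CLAIMABLE          /* eciwx/ecowx: AACK#/TA# unless claimed      */
    } bridge_resp_t;

    bridge_resp_t decode_tt(unsigned tt)    /* tt = TT[0:3]; TT[4] is ignored */
    {
        switch (tt & 0xF) {
        case 0x1: case 0x3: case 0x9:              /* writes                  */
            return RESP_WRITE;
        case 0x5: case 0x7: case 0xD: case 0xF:    /* reads, incl. RWITM      */
            return RESP_READ;
        case 0xA: case 0xE:                        /* ecowx / eciwx           */
            return RESP_CLAIMABLE;
        default:                                   /* address-only / reserved */
            return RESP_ADDRESS_ONLY;
        }
    }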

1.3 System Response by CPU Bus Address Range

The 660 Bridge determines the target of a CPU bus master transaction based on the CPU bus address range, as shown in Table 2. The acronym BCR refers to a (660) Bridge Control Register.

Table 2. 660 Bridge Address Mapping of CPU Bus Transactions

CPU Bus Address | Other Conditions | Target Transaction | Target Bus Address | Notes
0 to 2G (0000 0000h to 7FFF FFFFh) | - | System Memory | 0 to 2G (0000 0000h to 7FFF FFFFh) | 1, 2
2G to 2G+8M (8000 0000h to 807F FFFFh) | Contiguous mode | PCI I/O Transaction, BCR Transaction, or PCI Configuration Transaction | 0 to 8M (0000 0000h to 007F FFFFh) | 3
2G to 2G+8M (8000 0000h to 807F FFFFh) | Non-contiguous mode | PCI I/O Transaction, BCR Transaction, or PCI Configuration Transaction | 0 to 64K (0000 0000h to 0000 FFFFh) | 4
2G+8M to 2G+16M (8080 0000h to 80FF FFFFh) | - | PCI Configuration (Type 0) Transaction | PCI configuration space (0080 0000h to 00FF FFFFh) | -
2G+16M to 3G-8M (8100 0000h to BF7F FFFFh) | - | PCI I/O Transaction | 16M to 1G-8M (0100 0000h to 3F7F FFFFh) | -
3G-8M to 3G (BF80 0000h to BFFF FFFFh) | - | BCR Transactions and PCI Interrupt Acknowledge Transactions | 1G-8M to 1G (3F80 0000h to 3FFF FFFFh) | 3, 6
3G to 4G-2M (C000 0000h to FFDF FFFFh) | - | PCI Memory Transaction | 0 to 1G-2M (0000 0000h to 3FDF FFFFh) | -
4G-2M to 4G (FFE0 0000h to FFFF FFFFh) | Direct attach ROM | ROM Read, Write, or Write Lockout (BCR Transaction) | 0 to 2M (0000 0000h to 001F FFFFh), the ROM address space | 5
4G-2M to 4G (FFE0 0000h to FFFF FFFFh) | Remote ROM | PCI Memory Transaction to I/O Bus Bridge | 1G-2M to 1G (3FE0 0000h to 3FFF FFFFh) | 5

Notes:

  1. System memory can be cached. Addresses from 2G to 4G are not cacheable.
  2. Memory does not occupy the entire address space.
  3. Registers do not occupy the entire address space.
  4. In non-contiguous mode, each 4K page in the 8M CPU bus address range maps to 32 bytes in PCI I/O space.
  5. Registers and memory do not occupy the entire address space. Accesses to unoccupied addresses result in all one-bits on reads and no-ops on writes.
  6. A memory read of BFFF FFF0h generates an interrupt acknowledge transaction on the PCI bus.
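
The target selection of Table 2 reduces to a chain of range compares, sketched below in C. The target names are invented for illustration; the contiguous/non-contiguous distinction (notes 3 and 4) changes only the PCI I/O address formation, not the target selection.

    /* Target decode per Table 2; names are illustrative. */
    typedef enum {
        TGT_SYSTEM_MEMORY,        /* 0 to 2G                    */
        TGT_PCI_IO_BCR_CONFIG,    /* 2G to 2G+8M                */
        TGT_PCI_CONFIG,           /* 2G+8M to 2G+16M (type 0)   */
        TGT_PCI_IO,               /* 2G+16M to 3G-8M            */
        TGT_BCR_INT_ACK,          /* 3G-8M to 3G                */
        TGT_PCI_MEMORY,           /* 3G to 4G-2M                */
        TGT_ROM                   /* 4G-2M to 4G                */
    } cpu_target_t;

    cpu_target_t decode_target(unsigned long a)   /* 32-bit CPU address */
    {
        if (a <= 0x7FFFFFFFUL) return TGT_SYSTEM_MEMORY;
        if (a <= 0x807FFFFFUL) return TGT_PCI_IO_BCR_CONFIG;
        if (a <= 0x80FFFFFFUL) return TGT_PCI_CONFIG;
        if (a <= 0xBF7FFFFFUL) return TGT_PCI_IO;
        if (a <= 0xBFFFFFFFUL) return TGT_BCR_INT_ACK;
        if (a <= 0xFFDFFFFFUL) return TGT_PCI_MEMORY;
        return TGT_ROM;
    }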

1.3.1 Address Mapping for Contiguous I/O

In contiguous I/O mode, CPU addresses from 2G to 2G + 8M generate a PCI I/O cycle on the PCI bus with PCI_AD[29:00] unchanged. The low 64K of PCI I/O addresses are forwarded to the ISA bus unless claimed by a PCI agent.

Memory page protection attributes can only be assigned in 4K groups of ports, rather than in 32-port groups as in non-contiguous mode. This is the power-on default mode. Figure 2 gives an example of contiguous I/O partitioning.

1.3.2 Address Mapping for Non-Contiguous I/O

Figure 3 shows the address mapping that the 660 Bridge performs in non-contiguous mode.



The I/O map type register (address 8000 0850h) and the bridge chip set options 1 register (index BAh) control the selection of contiguous and non-contiguous I/O. In non-contiguous mode, the 8M address space of the 60X bus is compressed into 64K of PCI address space, and the 60X CPU cannot create PCI I/O addresses from 64K to 8M.

In non-contiguous I/O mode, the 660 Bridge partitions the address space such that each 4K page is remapped into a 32-byte section of the 0 to 64K ISA port address space. Thus 60X CPU protection attributes can be assigned to any of the 4K pages. This provides a flexible mechanism to lock the I/O address space against change by user-state code. This partitioning spreads the ISA I/O address locations over 8M of CPU address space.

In non-contiguous mode, the first 32 bytes of each 4K page are mapped to a 32-byte space in the PCI address space. The remainder of the addresses in the 4K page are aliases of the same 32-byte PCI space, and are assigned the same protection attributes in the CPU.

For example, in Figure 4, 60X CPU addresses 8000 0000h to 8000 001Fh are converted to PCI I/O port 0000h through 001Fh. PCI I/O port 0020h starts in the next 4K page at 60X CPU address 8000 1000h.
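
A minimal C sketch of this non-contiguous address formation, assuming only the 4K-page-to-32-byte compression described above:

    /* Map a CPU address in the 8M window at 2G to a PCI I/O port address. */
    unsigned long noncontig_pci_io(unsigned long cpu_addr)
    {
        unsigned long off  = cpu_addr - 0x80000000UL;  /* offset into window */
        unsigned long page = off >> 12;                /* 4K page number     */
        return (page << 5) | (off & 0x1F);             /* 32-byte group      */
    }
    /* Per the text: 0x80000000..0x8000001F map to ports 0x0000..0x001F,
       and 0x80001000 maps to port 0x0020; the rest of each 4K page
       aliases the same 32 ports. */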



1.3.3 PCI Final Address Formation

The 660 Bridge maps 60X CPU bus addresses from 2G to 4G as PCI transactions, error address register reads, or ROM reads and writes, manipulating the 60X bus address from 2G to 4G to form the final PCI address as described in the preceding sections.

1.4 CPU to Memory Transfers

The system memory address space is from 0 to 2G. Physical memory does not occupy the entire address space. When the CPU reads an unpopulated location, the 660 Bridge returns all-ones and completes the transfer normally. When the CPU writes to an unpopulated location, the Bridge signals normal transfer completion to the CPU but does not write the data to memory. The memory select error bit in the error status 1 register (bit 5 in index C1h) is set in both cases.
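
Firmware can exploit this behavior to size memory. The sketch below is illustrative only; the BCR accessor functions are hypothetical placeholders, the text above specifies only that the memory select error bit is bit 5 of the error status 1 register (index C1h), and the bit-numbering convention is assumed here to be LSB = bit 0.

    #include <stdint.h>

    extern uint8_t read_bcr(uint8_t index);    /* hypothetical BCR accessors */
    extern void    clear_bcr(uint8_t index);

    /* Returns nonzero if the addressed location appears populated. */
    int location_is_populated(volatile uint32_t *p)
    {
        clear_bcr(0xC1);                 /* clear error status 1            */
        (void)*p;                        /* read completes normally anyway  */
        return !(read_bcr(0xC1) & (1u << 5));  /* memory select error set?  */
    }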

All CPU to memory writes are posted and can be pipelined.

The 660 Bridge supports all CPU to memory bursts, and all single-beat transfer sizes and alignments that do not cross an 8-byte boundary, which includes all memory transfers initiated by the 604 CPU.

1.4.1 LE Mode

The bridge supports all transfer sizes and alignments that the CPU can create in LE mode; however, all loads or stores must be at natural alignments in LE mode (or the PowerPC 604 will take an alignment exception). Also, load/store multiple word and load/store string word instructions are not supported in the CPU in LE mode.

1.5 CPU to PCI Transactions

Since all CPU to PCI transactions are memory mapped on the CPU bus, software must, in general, use the eieio instruction, which enforces in-order execution, particularly for PCI I/O and configuration transactions. Some PCI memory operations can also be sensitive to access order. See the 660 Bridge User's Manual.
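
For example, a driver might fence its memory-mapped PCI accesses with eieio, as in the following sketch (GCC-style inline assembly assumed; the helper name is ours):

    #define eieio() __asm__ __volatile__ ("eieio" ::: "memory")

    /* Write a PCI I/O port, then guarantee the store is ordered ahead
       of any subsequent load or store to I/O space. */
    static inline void pci_io_write8(volatile unsigned char *port,
                                     unsigned char val)
    {
        *port = val;
        eieio();
    }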

All addresses from 2G to 4G (including ROM space) must be marked non-cacheable. See the PowerPC Reference Platform Specification. The reference design supports all PCI bus protocols during CPU to PCI transactions.

The reference design supports all CPU to PCI transfer sizes that do not cross a 4-byte boundary. The reference design also supports 8-byte CPU to PCI writes that are aligned on an 8-byte boundary. The bridge does not support CPU bursts to the PCI bus.

When the 660 Bridge decodes a CPU access as targeted for the PCI, the 660 Bridge requests the PCI bus. Once the SIO grants the PCI bus to the 660 Bridge, the bridge initiates the PCI cycle and releases the CPU bus.

CPU to PCI transactions that the PCI target retries cause the 660 Bridge to deassert its PCI_REQ# (the bridge follows the PCI retry protocol). The bridge stays off of the PCI bus for two PCI clocks before reasserting PCI_REQ# (or FRAME#, if the PCI bus is idle and the PCI_GNT# to the bridge is active).

1.5.1 CPU to PCI Read

If the CPU to PCI cycle is a read, a PCI read cycle is run. If the PCI read cycle completes, the data is passed to the CPU, and the CPU cycle is ended. If the PCI cycle is retried, the CPU cycle is retried. If a PCI master access to system memory is detected before the PCI read cycle is run, then the CPU cycle is retried (and no PCI cycle is generated).

1.5.2 CPU to PCI Write

If the CPU to PCI cycle is a write, a PCI write cycle is run. CPU to PCI I/O writes are not posted, as per the PCI Local Bus Specification version 2.1. If the PCI transaction is retried, the Bridge retries the CPU.

CPU to PCI memory writes are posted, so the CPU write cycle is ended as soon as the data is latched. If the PCI cycle is retried, the Bridge retries the cycle until it completes.

1.5.2.1 Eight-Byte Writes to the PCI (Memory and I/O)

The 660 Bridge supports 1-byte, 2-byte, 3-byte, and 4-byte transfers to and from the PCI. The 660 Bridge also supports 8-byte memory and I/O writes (writes only, not reads) to the PCI bus. This enables the use of the 604 store multiple instruction to PCI devices. When an 8-byte write to the PCI is detected, it is not posted initially. Instead, the CPU waits until the first 4-byte write occurs; the second 4-byte write is then posted. If the PCI retries the first 4-byte transfer, or a PCI master access to system memory is detected before the first 4-byte transfer, then the CPU is retried. If the PCI retries the second 4-byte transfer, then the 660 Bridge retries the PCI write.

1.5.3 CPU to PCI Memory Transactions

CPU transfers from 3G to 4G - 2M are mapped to the PCI bus as memory transactions.

1.5.4 CPU to PCI I/O Transactions

CPU transfers from 2G+16M to 3G - 8M are mapped to the PCI bus as I/O transactions. In compliance with the PCI specification, the 660 Bridge master aborts all I/O transactions that are not claimed by a PCI agent.

1.5.5 CPU to PCI Configuration Transactions

The reference design allows the CPU to generate type 0 and type 1 PCI configuration cycles. The CPU initiates a transfer to the appropriate address; the 660 Bridge decodes the cycle and generates a request to the PCI arbiter in the SIO. When the PCI bus is acquired, the 660 Bridge enables its PCI_AD drivers and drives the address onto the PCI_AD lines for one PCI clock before it asserts PCI_FRAME#. Predriving the PCI_AD lines for one clock before asserting PCI_FRAME# allows the IDSELs to be resistively connected to the PCI_AD[31:0] bus at the system level.

For configuration cycles, the transfer size must match the capabilities of the target PCI device. The reference design supports 1-, 2-, 3-, and 4-byte transfers that do not cross a 4-byte boundary, and supports doubleword-aligned 8-byte writes to the PCI.

Address unmunging and data byte swapping follow the same rules as for system memory with respect to BE and LE modes of operation. Address unmunging has no effect on the CPU address lines that correspond to the IDSEL inputs of the PCI devices.

See Section for more information on PCI configuration transactions, including the IDSEL assignment.

1.5.6 CPU to PCI Interrupt Acknowledge Transaction

Reading the interrupt acknowledge address (BFFF FFF0h) causes the bridge to arbitrate for the PCI bus and then to execute a standard PCI interrupt acknowledge transaction. The system interrupt controller in the ISA bridge claims the transaction and supplies the 1-byte ISA interrupt vector. There is no physical interrupt vector BCR in the bridge. Other PCI bus masters can initiate interrupt acknowledge transactions, but this may have unpredictable effects. Also see Section , Exceptions, for more information.
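
A minimal sketch of fetching the vector in C, assuming a simple byte load from the interrupt acknowledge address is sufficient to trigger the cycle:

    #define PCI_INT_ACK ((volatile unsigned char *)0xBFFFFFF0UL)

    /* A 1-byte read here runs a PCI interrupt acknowledge transaction;
       the system interrupt controller in the ISA bridge supplies the
       1-byte ISA interrupt vector. */
    unsigned char read_isa_vector(void)
    {
        return *PCI_INT_ACK;
    }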

1.5.7 PCI Locks and CPU Reservations

The 660 Bridge does not set PCI locks when acting (for the CPU) as the PCI busmaster. The CPU has no mechanism to initiate a PCI lock protocol.

The 660 Bridge allows PCI busmasters to lock one 32-byte cache sector (block) of system memory using the PCI_LOCK# signal. Once a PCI lock is established, the block address is saved. Subsequent accesses to that block from other PCI bus masters or from the CPU bus are retried until the lock is released.

The bridge generates a flush block (see the 660 Bridge User's Manual) snoop cycle on the CPU bus when a PCI bus master sets the PCI lock. The flush block snoop cycle causes the L1 and L2 caches to invalidate the locked block, which prevents cache hits on accesses to locked blocks. If the cache contains modified data, the PCI cycle is retried and the modified data is pushed out to memory.

The 604 does not have a bus locking function. Instead, the 604 uses the load reserve and store conditional instructions (lwarx and stwcx) to implement exclusive access. This reservation protocol is explained in The 604 User's Manual.

The 660 Bridge supports the 604 reservation protocol. The 660 Bridge takes no action on the PCI bus during a CPU request for reservation. Since the 660 Bridge broadcasts the address of all PCI to memory transactions to the CPU bus, the CPU can monitor PCI memory accesses. If one of the PCI to memory accesses violates the CPU reservation, the CPU takes the appropriate action.
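
For reference, the reservation protocol is the standard PowerPC lwarx/stwcx. retry loop, sketched here as an atomic increment (GCC-style inline assembly assumed; nothing in this loop is specific to the 660 Bridge):

    /* Atomically increment a word in cacheable system memory. */
    static inline void atomic_inc(volatile long *p)
    {
        long t;
        __asm__ __volatile__ (
            "1: lwarx  %0,0,%1\n"   /* load word and set reservation     */
            "   addi   %0,%0,1\n"
            "   stwcx. %0,0,%1\n"   /* store only if reservation held    */
            "   bne-   1b\n"        /* reservation lost (e.g., a PCI to
                                       memory access hit it): retry      */
            : "=&r" (t) : "r" (p) : "cr0", "memory");
    }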

1.6 CPU to ROM Transfers

The PowerPC Reference Platform Specification allocates the upper 8M of the 4G CPU address space as ROM space. The 660 Bridge implements a 2M ROM space from 4G - 2M to 4G. The actual ROM is a 512K AMD Flash ROM device located at 4G - 2M. The ROM is attached to the 660 Bridge via the PCI_AD lines. This mode is required when using the Intel SIO. ROM device writes and write-protect commands are supported. See the 660 Bridge User's Manual for more information.

The ROM device attaches to the 660 Bridge by means of control lines and the PCI_AD[31:0] lines. When a CPU bus master reads from the ROM, the bridge masters a BCR transaction, during which it reads the ROM and returns the data to the CPU. CPU writes to the ROM are also forwarded to the ROM device as long as the write-protect bit in the 660 Bridge is not set.

Although connected to the PCI_AD lines, the ROM is not a PCI agent. The ROM and the PCI agents do not interfere with each other because the ROM is under bridge control, and the bridge does not enable the ROM except during ROM cycles. The bridge accesses the ROM by means of BCR transactions. Other PCI devices cannot read or write the ROM because they cannot generate BCR transactions.

1.6.1 CPU to ROM Read

At power-on, the 604 CPU comes up in BE mode with the L1 cache disabled, and begins fetching instructions (using 8-byte single-beat reads) at address FFF0 0100h (4G - 1M + 100h). The 660 Bridge also resets to BE mode.

The system ROM address space is from 4G - 2M to 4G. Since the size of the installed ROM is less than 2M (512K), it is aliased every 512K throughout the ROM space. Location 0 of the 512K ROM is mapped to CPU bus addresses 4G - 2M, 4G - 1.5M, 4G - 1M, and 4G - 0.5M.

The ROM is located on the PCI bus physically but not logically, and is 8 bits wide. This requires the 660 Bridge to decode the ROM address, run eight cycles on the PCI bus without asserting FRAME#, accumulate the eight single bytes of read data into an 8-byte group, and generate TA# and AACK# to complete the cycle. The CPU can also read the ROM using bursts, but it receives the same two instructions from the ROM on each beat of the burst. For more information, see the 660 Bridge User's Manual.

Software can lock out the ROM using a 660 Bridge BCR. When the CPU writes to any ROM location while the ROM is locked out, the bridge signals normal transfer completion to the CPU but does not write the data to the ROM. The "CPU bus write to the locked flash (ROM)" bit in the 660 Bridge error status 2 register (bit 0 in index C5h) is set.

1.6.2 CPU to ROM Write

Writing to the (flash) ROM is another specialized cycle. Only one address (FFFF FFF0h) is used for writing data to the ROM. The ROM address and data are both encoded into four bytes and written using a 4-byte write transfer. Eight-byte and burst transfers to the ROM are not supported. See the 660 Bridge User's Manual.

Writes to ROM may be performed in either BE or LE mode. The data byte swapper in the 660 Bridge is gated according to the endian mode. Writes in BE mode occur in natural sequence. However, address unmunging in LE mode has no effect on the cycle, because the addresses are ignored. Therefore, software must reverse the byte significance of the data and address encoded into the store instructions for LE-mode writes to the ROM.
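
A sketch of a ROM write in C is shown below. The byte packing used here (three address bytes followed by one data byte) is a hypothetical illustration only; the actual encoding is defined in the 660 Bridge User's Manual, and LE-mode callers must additionally reverse the byte significance as described above.

    #define ROM_WRITE_PORT ((volatile unsigned long *)0xFFFFFFF0UL)

    /* BE-mode sketch; the packing below is hypothetical. */
    void rom_write(unsigned long rom_addr, unsigned char data)
    {
        *ROM_WRITE_PORT = ((rom_addr & 0x00FFFFFFUL) << 8) | data;
    }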

1.6.2.1 ROM Write Protection

ROM write protection must be implemented in software. Port FFFF FFF1h can be used to lock out all ROM writes. Writing any data to this port address locks out all ROM writes until the 660 Bridge is hardware reset. In addition, the flash ROM device itself has means to permanently lock out changes to certain sectors by writing control sequences. Consult the Flash ROM specification for details.
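
In C, the lockout is a single store, sketched here:

    #define ROM_LOCKOUT_PORT ((volatile unsigned char *)0xFFFFFFF1UL)

    /* Lock out all ROM writes until the next hard reset of the 660 Bridge.
       Per the text, the data value written is ignored. */
    void rom_write_lockout(void)
    {
        *ROM_LOCKOUT_PORT = 0;
    }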

1.6.3 CPU to BCR Transfers

The 660 Bridge can be extensively programmed by means of the Bridge Control Registers (BCRs). See The 660 Bridge User's Manual for a description of the operation and programming of the 660 Bridge BCRs.
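
As a small example of a BCR access, the sketch below pulses L2_CLR# (invalidating the external L2 tags) by writing to 8000 0814h, per the L2_CLR# description in Table 3. The assumption that the write data value does not matter is ours.

    #define L2_CLR_PORT ((volatile unsigned char *)0x80000814UL)

    void l2_invalidate_tags(void)
    {
        *L2_CLR_PORT = 0;   /* any CPU bus write here pulses L2_CLR# for
                               one CPU clock; data assumed ignored       */
        __asm__ __volatile__ ("eieio" ::: "memory");
    }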

1.7 CPU Card Interface (CPU Slot)

The CPU card interface (slot) consists of the functional, physical, and electrical interface between the reference design motherboard and the reference design CPU card. The functional interface is described in Section 1.7.1, Signal Descriptions, and the physical and electrical interfaces are described in the remainder of Section 1.7. There are two CPU card slots. At least one slot must contain a CPU card containing a 604 CPU. Since the 660 Bridge parks the CPU bus on CPU slot 1, uniprocessor systems can achieve better performance by installing the CPU in that slot.

Information and constraints given for CPU cards also apply to compliant CPU busmaster cards that are not 604 CPU cards. The terms CPU and CPU busmaster are often used interchangeably in this document.

1.7.1 CPU Slot Signal Descriptions

Table 3 describes the signals used to interface the CPU cards to the motherboard. These signals are carried by the main CPU connector associated with each CPU slot. In Table 3, signals are labeled as inputs (I) or outputs (O) as viewed from the motherboard. Thus DBG# is shown as an (O) output because it is an output of the motherboard. For more information on the CPU bus signals, see the 604 and 660 Bridge User's Manuals.

Table 3. CPU Slot Signal Descriptions

In each entry below, the signal name is followed by its direction (I = input, O = output, as viewed from the motherboard) and the applicable note number, then by a description and the motherboard and CPU card connection details.

604 Bus Signals

A[0:31] (I/O, Note 1)

CPU Address Bus
Represents the 32-bit physical address of the current transaction. The address is valid from the bus cycle in which TS# is asserted through the bus cycle in which AACK# is asserted. The address bus is driven by the busmaster.

Motherboard: These signals are bussed (connected) to each CPU slot and the L2 slot.

CPU Card:

AACK# (I/O, Note 1)

Address Acknowledge
Assertion indicates completion of the current address tenure.

Motherboard: This signal is bussed (connected) to each CPU slot.

CPU Card:

ABB# (I/O, Note 1)

Address Bus Busy
Indicates that the address bus is in use and cannot be driven (or claimed) by another busmaster, regardless of the state of BG# for that master.

Motherboard: 10K pullup. This signal is bussed (connected) to each CPU slot.

CPU Card:

AP[0:3] (I/O, Note 1)

Address Bus Parity
Indicates that the parity of each byte of the address bus is odd (1) or even (0). Even parity generates an address parity error in a 604. Driven by the busmaster.

Motherboard: 10K pullup. The motherboard does not support CPU address bus parity generation or checking on transactions to memory or I/O. These signals are pulled up to force odd parity and avoid CPU address parity errors if no CPU is mastering the bus. These signals are bussed (connected) to each CPU slot.

CPU Card:

ARTRY# (I/O, Note 1)

Address Retry
Indicates that the current CPU address tenure is to be terminated and retried later. If the data tenure is already in progress, it must be aborted by the master and retried later. This signal allows other busmasters to back off the current busmaster in order to maintain coherency.

Motherboard: 10K pullup. This signal is bussed (connected) to each CPU slot.

CPU Card:

BG# (O, Note 1)

Bus Grant
Indicates that the requesting busmaster can assume ownership of the address bus (with proper qualification by ABB# and ARTRY#).

Motherboard: 10k pullup. There is an individual BG# for each CPU slot.

CPU Card:

BR# (I, Note 1)

Bus Request
Asserted by a busmaster to request mastership of the address bus.

Motherboard: 10K pullup. There is an individual BR# for each CPU slot.

CPU Card:

WT# (I, Note 1)

Write Thru
Indicates that the single-beat transfer currently on the bus is write-thru. Second-level caches should not post this write operation.

Motherboard: 10K pullup. This signal is bussed (connected) to each CPU slot.

CPU Card:

CI# (I, Note 1)

Cache Inhibit
Indicates that the single beat transfer currently on the bus is not being cached.

Motherboard: 10K pullup. This signal is bussed (connected) to each CPU slot.

CPU Card:

DBB# (I/O, Note 1)

Data Bus Busy
Indicates that the data bus is busy. Used to qualify DBG# for ownership of the data bus.

Motherboard: 10K pullup. This signal is bussed (connected) to each CPU slot. If fast-L2/data streaming mode is desired, resistor R232 must be removed to avoid bus contention between two processors.

CPU Card: CPU cards designed to run in fast-L2/data streaming mode may require special wiring of this signal to place the CPU in this mode and to avoid bus contention on this signal. (For a 604 CPU card without a serial L2, fast-L2/data streaming mode can be implemented by adding a 10K pullup to DBB#.) Also see DRTRY# and DBG#.

DBG# (O, Note 1)

Data Bus Grant
Indicates that the CPU or other busmaster can, with proper qualification, assume mastership of the data bus.

Motherboard: This signal is bussed (connected) to each CPU slot.

CPU Card: CPU cards designed to run in fast-L2/data streaming mode may require special wiring of this signal. Also see DRTRY# and DBB#.

DRTRY# (O, Note 1)

Data Retry
Indicates that the busmaster must invalidate the data from the previous read data beat.

Motherboard: 10K pullup. Fast-L2/data streaming mode is supported by the 660 Bridge. The motherboard is wired so that it defaults to DRTRY# mode. This signal is bussed (connected) to each CPU slot.

CPU Card: CPU cards designed to run in fast-L2/data streaming mode may require special wiring of this signal to place the CPU in this mode and avoid bus contention on this signal. (For a 604 CPU card without a serial L2, fast-L2/data streaming mode can be implemented by connecting DRTRY# on the 604 to HRESET# on the CPU card slot. Leave DRTRY# on the CPU card slot unconnected and floating.) Also see DBB# and DBG#.

DH[0:31] (I/O, Note 1)

Data Bus High
Most significant 4 bytes of the data bus.

Data Bus Signals Byte Lane
DH[0:7] 0 (MSB)
DH[8:15] 1
DH[16:23] 2
DH[24:31] 3

Motherboard: These signals are bussed (connected) to each CPU slot and the L2 slot.

CPU Card:

DL[0:31] (I/O, Note 1)

Data Bus Low
Least significant 4 bytes of the data bus.

Data Bus Signals Byte Lane
DL[0:7] 4
DL[8:15] 5
DL[16:23] 6
DL[24:31] 7 (LSB)

Motherboard: These signals are bussed (connected) to each CPU slot and the L2 slot.

CPU Card:

DP[0:7] (I/O, Note 1)

Data Bus Parity
Indicates the parity by byte of the CPU data bus (1 = odd, 0 = even).

Parity Bit Byte Lane Parity Bit Byte Lane
DP[0] 0 DP[4] 4
DP[1] 1 DP[5] 5
DP[2] 2 DP[6] 6
DP[3] 3 DP[7] 7

Motherboard: The 660 Bridge checks data bus parity (using DP[0:7], DH[0:31], and DL[0:31]) during CPU writes. During CPU read transfers, the 660 Bridge drives DP[0:7] with the (odd) parity information stored in the DRAM (if there is an L2 hit, the L2 supplies the stored parity information). These signals are bussed (connected) to each CPU slot and to the L2 slot.
CPU Card:

DPE# (I, Note 1)

Data Parity Error
Indicates that the busmaster has detected a parity error.

Motherboard: During CPU to memory reads that result in L2 hits, the 660 Bridge monitors DPE# to detect CPU data bus parity errors. This signal is bussed (connected) to each CPU slot.

CPU Card: If parity is not supported by the CPU card, this signal should be pulled up by a 10K ohm resistor on the CPU card.

GBL# (I/O, Note 1)

Global
Indicates that the current transaction should be snooped by other busmasters.

Motherboard: This signal is bussed (connected) to each CPU slot.

CPU Card:

SHD# (I/O, Note 1)

Shared
Indicates a cache hit on a shared block. If asserted with ARTRY#, then the asserting busmaster will perform a snoop push of modified data.

Motherboard: 10K pullup. This signal is bussed (connected) to each CPU slot.

CPU Card:

TA# (O, Note 1)

Transfer Acknowledge
Asserted by the target of the CPU transfer to indicate that the current data beat has been accepted. For each CPU clock that TA# is asserted, a data beat completes. For a single-beat (8-byte) data tenure, TA# is asserted for only one clock. For a four-beat (32-byte) burst, the data tenure completes on the fourth cycle in which TA# is asserted.

Motherboard: 10K pullup. This signal is bussed (connected) to each CPU slot.

CPU Card:

TBST# (I/O, Note 1)

Transfer Burst
Indicates that a burst (32-byte) transfer is in progress.

Motherboard: 10k pullup. This signal is bussed (connected) to each CPU slot.

CPU Card:

TEA# (O, Note 1)

Transfer Error Acknowledge
Indicates that an exception has occurred and that the CPU should take a machine check exception. Assertion of TEA# terminates the current data beat and tenure.

Motherboard: 10K pullup, TEA# can be masked on the motherboard via the bridge chipset control registers. This signal is bussed (connected) to each CPU slot.

CPU Card:

TS# (I/O, Note 1)

Transfer Start
Indicates the start of a memory or memory mapped I/O transaction by a busmaster and that the address bus and address transfer attributes are valid.

Motherboard: 10k pullup. This signal is bussed (connected) to each CPU slot.

CPU Card:

TSIZ[0:2] (I/O, Note 1)

Transfer Size
This 3-bit transfer size encoding, in conjunction with TBST#, indicates the size of the current CPU data bus beat.

TBST#  TSIZ[0:2]  Transfer Size
0      010        Burst (32 bytes)
1      000        8 bytes
1      001        1 byte
1      010        2 bytes
1      011        3 bytes
1      100        4 bytes
1      101        5 bytes
1      110        6 bytes
1      111        7 bytes

Motherboard: These signals are bussed (connected) to each CPU slot.

CPU Card:

TT[0:4] (I/O, Note 1)

Transfer Type
The 5-bit encoded transfer type of the current bus transaction. See the 604 User's Manual and Table 1 for more information.

Motherboard: See the 660 Bridge User's Manual for more information. These signals are bussed (connected) to each CPU slot.

CPU Card:

XATS# (I, Note 1)

Extended Address Transfer Start
Indicates the start of a direct store operation on the bus.

Motherboard: 10K pullup. This operation is unsupported on the motherboard and will result in an exception (TEA# asserted). This signal is bussed (connected) to each CPU slot.

CPU Card: Do not assert XATS#.

L2 Controller

L2_BR# (I, Note 2)

L2 Bus Request
Indicates that the L2 cache controller or second master is requesting mastership of the address bus.

Motherboard: No connect. The motherboard does not support a second master in the same CPU card slot. For each CPU slot, BR# is hardwired to the arbiter, and L2_BR# is unconnected. The L2_BR# for the slot can be connected to the BR# for the slot by installing a 0-ohm resistor (R76 for CPU slot 1 and R20 for CPU slot 2).

CPU Card: CPU cards must contain only one busmaster. This busmaster must use the primary bus request for the slot, BR#.

L2_BG# (O, Note 2)

L2 Bus Grant
Indicates that the L2 cache controller or second master can assume ownership of the address bus (given proper qualification by ABB# and ARTRY#).

Motherboard: No connect. The motherboard does not support a second master in the same card slot. For each CPU slot, BG# is hardwired to the arbiter, and L2_BG# is unconnected. The L2_BG# for the slot can be connected to the BG# for the slot by installing a 0-ohm resistor (R66 for CPU slot 1 and R83 for CPU slot 2).

CPU Card: CPU cards must contain only one busmaster. This busmaster must use the primary bus grant for the slot, BG#.

L2_CLAIM# (I, Note 2)

L2 CPU Bus Claim
Indicates that a CPU bus target (e.g., an L2 cache controller) is claiming the CPU bus transfer and will supply the data, the address bus transfer signals, and the data bus transfer signals, as appropriate, for a target. The 660 Bridge aborts the memory controller cycle and tristates its AACK#, TA#, TEA#, and CPU data bus drivers.

Motherboard: 10K pullup. This signal is bussed (connected) to each CPU slot.

CPU Card: The address range in which this signal can be asserted depends on the state of the 660 Bridge. While the 660 Bridge internal L2 is enabled, L2_CLAIM# can be asserted for accesses from the top of memory up to 2G (e.g., if 8M of DRAM is installed, L2_CLAIM# can be asserted from 8M to 2G). While the 660 Bridge internal L2 is disabled, L2_CLAIM# can be asserted for accesses from 0 to 2G.

L2_CLR# (O, Note 2)

L2 Cache Clear
Indicates that all of the L2 tags are to be invalidated or cleared.

Motherboard: This signal is shared with the embedded L2 cache in the 660 Bridge. An active-low pulse is generated on this signal for one CPU clock whenever a CPU bus write is performed to 8000 0814h. This signal is bussed (connected) to each CPU slot and the L2 slot.

CPU Card:

L2_INH# (O, Note 4)

L2 Cache Miss Inhibit
Indicates that the L2 cache should be inhibited from updating the SRAM on all L2 misses. This allows the L2 cache to retain its contents during memory accesses, while maintaining coherency (snooping and tag updates continue to occur).

Motherboard: This signal is not shared with the embedded L2 cache on the 660 Bridge. It can only be accessed when the 660 Bridge is in external register mode (see the Bridge Chip Set Options 3 BCR in the 660 User's Manual). In this mode, the signal follows the value written to System Control BCR 0 (address 8000 081Ch, bit 7). This signal is bussed (connected) to each CPU slot.

CPU Card:

L2_FLUSH# (O, Note 4)

L2 Cache Flush
Indicates that the L2 in this slot should write all modified lines to memory and mark all lines as invalid.

Motherboard: This signal is not shared with the embedded L2 cache on the 660 Bridge. It can only be accessed when the 660 Bridge is in external register mode (see the Bridge Chip Set Options 2 BCR in the 660 User's Manual). In this mode, the signal follows the value written to System Control BCR 0 (address 8000 081Ch, bit 4). This signal is bussed (connected) to each CPU slot.
CPU Card:

L2_DISABLE# (O, Note 4)

L2 Cache Disable
Indicates that the L2 in this slot should be disabled. This signal can be used by the system to inhibit all operations of the cache without invalidating the data in the cache. It is the responsibility of the system to maintain coherency in this case.

Motherboard: This signal is not shared with the embedded L2 cache on the 660 Bridge. It can only be accessed when the 660 Bridge is in external register mode (see the Bridge Chip Set Options 2 BCR in the 660 User's Manual). In this mode, the signal follows the value written to System Control BCR 0 (address 8000 081Ch, bit 6). This signal is bussed (connected) to each CPU slot.

CPU Card:

CONFIG# (O, Note 4)

Configuration Enable
A low on this signal enables a serial L2 cache to decode ranges of CPU bus addresses as configuration transfers. These transfers will not be forwarded to the system bus.

Motherboard: This signal is a programmable I/O and follows the value of the L2 Control Register located at 8000 086Bh bit 2. This signal is bussed (connected) to each CPU slot.

CPU Card: These transfers should be intercepted by the serial cache and not forwarded to the system bus. When two CPU cards with serial caches are installed, software must prevent CPU_x from performing unintended configuration cycles while CPU_y is using the signal to configure its serial cache. Software must sequence L2 configuration cycles.

L2_WT# (O, Note 4)

L2 Cache Write Thru Only
A low indicates that the L2 cache in this slot should operate in write-thru mode only.

Motherboard: This signal is a programmable I/O and follows the value of the L2 Control Register located at 8000 086Bh bit 1. This signal is bussed (connected) to each CPU slot.

CPU Card:

L2_AACK_EN (O, Note 4)

L2 AACK Bi-Directional enable
A high on this signal enables the L2 in this slot to drive AACK#. A low indicates that the L2 should treat AACK# as an input only.

Motherboard: Always pulled up to 3.3V via a 10K pullup. On the motherboard, the pullup resistors (R19 for slot 1 and R7 for slot 2) can be removed and pulldown resistors added (R12 for slot 1 and R86 for slot 2). There is an individual L2_AACK_EN for each CPU slot.

CPU card: L2 cards designed to drive AACK# must also drive L2_CLAIM# to avoid bus contention on AACK#.

Clock/Interrupts/Resets

BUS_CLK[0:2] (O, Note 1)

System Bus Clocks
Bus Clocks for the CPU bus.

Motherboard: Nominally 66MHz or 60MHz, depending on the FREQ_ID[0:3] of both slots. If no card is present, system firmware may disable these clocks via the Freeze Clock Registers at ports 8000 0860h and 8000 0862h. See Section (ELPD) for information on the operation of these ports. There are three individual BUS_CLKs for each CPU slot.

CPU Cards: To minimize skew and other clock problems, make each clock trace 3.75" +/- .1". Place exactly one load on each clock line that is used.

INT_A# (O, Notes 1, 3)

CPU Interrupt
Active low, level sensitive interrupt signal to the CPU.

Motherboard: Slot and CPU specific interrupt output to the CPU in the card slot. These outputs come from the MPIC device on the motherboard. There is an individual INT_A# for each CPU slot. INT_A# from CPU slot 1 connects to the INT0# out of MPIC. INT_A# from CPU slot 2 connects to the INT1# output of MPIC (see Section 22).

CPU Card:

CHECKSTOP# (I, Note 1)

Checkstop Out
Indicates that the CPU in this slot has detected a checkstop condition and has ceased operation.

Motherboard: 10K pullup. There is an individual CHECKSTOP# for each CPU slot. The RISCWatch system uses this signal to monitor the status of the CPUs.

CPU Card:

MCP_A# (I/O, Notes 1, 3, 4)

Machine Check Interrupt
Initiates a machine check interrupt to the CPU. May be asserted asynchronously.

Motherboard: 330-ohm pullup to 3.3V. This signal is used to report system errors to the CPU. The 660 Bridge asserts this signal for two CPU clock cycles in the event of a catastrophic or unrecoverable system error. The MCP_A# signal is not processor or slot specific.

CPU Cards: The MCP_A# signal should be wire-ORed to any device on the CPU card that monitors or asserts MCP#. Devices that assert MCP_A# must use open-drain outputs to avoid contention with other devices. No pullup or pulldown resistors should be connected.

SRESET_A# (O, Notes 1, 3, 4)

Soft Reset
Assertion initiates a soft reset exception in the CPU.

Motherboard: The SRESET_A# signal is driven by logic on the motherboard, which allows three methods of asserting SRESET_A#: a CPU-specific soft reset can be issued by the MPIC to any individual CPU in the system; a global soft reset can be generated by writing a 0 to port 8000 0092h bit 0, or by the RISCWatch interface. There is an individual SRESET_A# for each CPU slot.

CPU Card:

HRESET_A# (O, Notes 1, 4)

Hardware Reset
Assertion initiates a hard reset operation in the CPU.

Motherboard: The HRESET_A# signal is driven by logic on the motherboard, which allows three methods of asserting HRESET_A#: a CPU-specific hard reset can be issued via the Processor Enable Register at port 8000 0871h (see Section ); a global hard reset can be generated by the power supply power-good indicator or via the RISCWatch interface. There is an individual HRESET_A# for each CPU slot.

CPU Card:

HALT_A/QREQ_A (I, Notes 1, 4)

HALTED
Indicates that the internal clocks have been stopped on the CPU due to entering a power management state.

Motherboard: 10K pullup. Logic on the motherboard monitors the HALT_A/QREQ_A inputs from each CPU slot, and deasserts the RUN signal to both CPUs when they both have halted. There is an individual HALT_A/QREQ_A for each CPU slot.

CPU Card:

RUN/QACK (O, Notes 1, 4)

Run
When asserted, RUN/QACK indicates that the CPU must force internal clocks to run during power managed Nap mode, thus allowing bus transactions to be snooped. When deasserted, it allows the internal CPU clocks to be stopped, which will suspend CPU snooping of CPU bus operations.

Motherboard: Common to all CPU slots, RUN is used in conjunction with the HALT signal to manage the entry of the CPUs into the power managed modes.

CPU Card:

SMI# (O, Notes 1, 4)

System Management Interrupt
Asynchronous interrupt that indicates that the CPU should initiate an SMI operation. Often used by the system to force the CPU out of power managed states.

Motherboard: This signal is held deasserted by the motherboard, and is bussed (connected) to each CPU slot.

CPU Card:

TBEN (O, Note 1)

Timebase Enable
Indicates that the CPU time base counter should continue clocking.

Motherboard: 10K pullup. This signal is bussed (connected) to each CPU slot.

CPU Card:

TCK (O, Note 1)

JTAG Clock
Serial clock for JTAG interface

Motherboard: 10K pullup. This signal is connected to the RISCWatch connector pin 5, and is bussed (connected) to each CPU slot.

CPU Card: CPU cards with more than one JTAG capable device should connect this signal to all device shift clock inputs.

TRST# (O, Note 1)

JTAG TEST Reset
Resets JTAG logic.

Motherboard: Logic on the motherboard asserts TRST# during hardware reset of the CPU to ensure proper resetting of the CPU. The motherboard also asserts TRST# if the RISCWatch interface has asserted OCS_OVERRIDE (J18 pin 15). There is an individual TRST# for each CPU slot.

CPU Card: CPU cards with more than one JTAG capable device should connect this signal to all JTAG reset inputs.

TDI (O, Note 1)

JTAG Test Data Input
This is the scan string data connection to the first JTAG device input in the CPU card JTAG chain.

Motherboard: 10K pullup. There is an individual TDI for each CPU slot. The motherboard connects this signal to the RISCWatch connector J18 pin 7 (SCAN_IN) via the J81 jumpers, which can be set to configure the scan chain as shown:

J81 pin layout    Install Jumpers     Scan Chain
1 * * 2           1-3 & 4-6           Slot 1 only
3 * * 4           2-4 & 3-5           Slot 2 only
5 * * 6           1-3, 2-4, & 5-6     Slot 1 -> Slot 2
                  1-2, 3-5, & 4-6     Slot 2 -> Slot 1

CPU Card: CPU cards with more than one JTAG capable device should connect this signal to the first JTAG device input in the loop.

TDO (I, Note 1)

JTAG Test Data Output
This is the scan string data connection to the last JTAG device output in the loop.

Motherboard: There is an individual TDO for each CPU slot. The motherboard connects this signal to the RISCWatch connector J18 pin 8 (SCAN_OUT) via the J81 jumpers, which can be used to configure the scan chain as shown for the TDI signal.

CPU Card: CPU cards with more than one JTAG capable device should connect this signal to the last JTAG device output in the loop.

TMS (I, Note 1)

JTAG Test Mode Select
JTAG interface signal

Motherboard: 10k pullup. This signal is bussed (connected) to each CPU slot and is connected to the RISCWatch connector J18 pin 4.

CPU Card: CPU cards with more than one JTAG capable device should connect this signal to the Test Mode Select input of all devices.

Configuration

DVR_MOD[0:1] (O)

CPU Output Drive Mode Select
Selects the drive capability of the bus drivers on CPUs that support this function.

Motherboard: These signals are bussed (connected) to each CPU slot and are set by resistors on the motherboard. Normal drive mode (0,1) is the default value and is recommended for the reference design.

DVR_MOD[0]    DVR_MOD[1]    Drive Mode
0             0             Disabled
0             1             Normal (default)
1             0             Strong
1             1             Herculean

CPU Card:

FREQ_ID[0:3] (O)

Frequency selection
These outputs determine the PLL settings of the CPU on the CPU card installed in the slot.

Motherboard: These signals are bussed (connected) to both CPU slots. Logic on the motherboard uses the PD[0:3] inputs to determine the best PLL configurations for the CPUs in the system (see Section , Clocking, for details).

CPU Card: These signals should be connected to the PLL[0:3] inputs of the CPU.

PD[0:3] (I)

CPU ID (and presence detect) inputs
These inputs indicate the desired operating frequency of the CPU on the CPU card.

Motherboard: 10K pullup. There is an individual PD[0:3] for each CPU slot. Logic on the motherboard uses these inputs to determine the correct operating frequency for the CPU bus and the values that will be presented to the CPUs via the FREQ_ID[0:3] outputs (see Section , Clocking, for details). These bits also indicate the absence of a CPU card if all PD bits are high and the software determines that no serial ROM configuration device is present. Also see L2_PD[0:1].
The status of PD[0:3] can be read via the L2 PD Register (8000 080Dh, bits [3:0]), with the following meanings:

PD[0:3] (bit 0, bit 1, bit 2, bit 3)    Slot contains
0000    No CPU. This is an L2-only card.
0001    100 MHz CPU
0010    120 MHz CPU
0011    132 MHz CPU
0100    150 MHz CPU
0101    167 MHz CPU
0110    180 MHz CPU
0111 through 1110    Reserved
1111    No CPU card present, or serial ROM present (*)

* If bits 3:0 of port 80D are all 1, then either no CPU card is present or a serial ROM with configuration information is present. The motherboard can read the serial ROM by driving PD[0] with the serial clock and reading or writing data on PD[1]. This interface is controlled via the Serial ROM Control Register (8000 0868h).

CPU Card: Use a 100-ohm, 10% pulldown resistor on those lines that are to be set to 0. Do not connect lines that are to be set to 1.

L2_PD[0:1] (I/O)

L2 Presence Detect 0 and 1
As inputs, these signals tell the motherboard if an L2 cache is present on the CPU card. They can also be used as I/O pins to read a serial ROM that can be attached to L2_PD[0:1], and which contains configuration information about the card in the slot. Also see Section , CPU Card.

Motherboard: 10K pullup. There is an individual L2_PD[0:1] for each CPU slot. The status of L2_PD[0:1] can be read via the CPU 1 PD Register (8000 0866h, bits 4 and 5) and the CPU 2 PD Register (8000 0867h, bits 4 and 5). These bits have the following encoding:

bit 4    bit 5    Slot contains
0        0        Reserved
1        0        TBD
0        1        TBD
1        1        No cache or serial ROM present (*)

* If bits 5:4 of port 866 or port 867 are both 1, then either no cache is present or a serial ROM with configuration information is present. The motherboard can read the serial ROM by driving L2_PD[0] with the serial clock and reading or writing data on L2_PD[1]. This interface is controlled via the Serial ROM Control Register (8000 0868h).

CPU Card: A 100-ohm, 10% resistor should be used when required to pull down these signals. The presence and type of SLC can be determined by the CPU using the CONFIG# signal (SLC information cannot be determined by examining the PD bits).

GND (Logic Ground)

Motherboard: These pins are the digital logic ground of the motherboard. They provide the return path for all power supplied to the CPU card.

VCC5 (+5.0V supply)

Motherboard: These pins are connected to the +5v pins of the motherboard power connector and to the 5v logic on the motherboard.

3.3/3.6V (+3.3/3.6V supply)
Can be either 3.3V or 3.6V, depending on system implementation.

Motherboard: These pins are connected to a 3.6V linearly regulated supply on the motherboard.

CPU Card: Do not connect devices that cannot tolerate a 3.6V supply.

3.3V (+3.3V supply)

Motherboard: These pins are connected to the 3.3v pins of the motherboard power supply connector and to the 3.3v logic on the motherboard.

Notes:

1) Refer to the PowerPC 604 User's Manual and The IBM27-82660 PowerPC to PCI Bridge User's Manual for the detailed definitions and timing of these signals. These signals conform to the functions defined in those specifications.

2) Refer to The IBM27-82660 PowerPC to PCI Bridge User's Manual for the detailed definitions and timing of these signals. These signals conform to the functions defined in this specification.

3) Some signal names are shown as SIG_A. The initial configuration of the reference design included support for 2 CPUs on each CPU card. Thus SIG_A was associated with the first CPU on a particular card, and SIG_B was associated with the second CPU. Signals associated with the second CPU flow over the auxiliary CPU connector. The second CPU on each CPU card and the auxiliary connector are not supported by the reference design at this time.

4) These signals may be asserted asynchronously by the motherboard logic.

5) These signals are unsupported on the CPU slot: APE#, TC[0:3], DBDIS#, CSE[0:1], RSRV#, and DBWO#.
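
As an illustration of the serial configuration ROM interface mentioned in the PD[0:3] and L2_PD[0:1] descriptions above, the sketch below bit-bangs one bit through the Serial ROM Control Register (8000 0868h). Only the register address and the clock-on-PD[0]/data-on-PD[1] scheme come from the table; the bit positions within the register are hypothetical.

    #define SROM_CTL  ((volatile unsigned char *)0x80000868UL)
    #define SROM_CLK  (1u << 0)   /* hypothetical: drives L2_PD[0] (clock) */
    #define SROM_DATA (1u << 1)   /* hypothetical: samples L2_PD[1] (data) */

    /* Clock one bit out of the card's serial configuration ROM. */
    unsigned srom_read_bit(void)
    {
        unsigned bit;
        *SROM_CTL |= SROM_CLK;                    /* clock high */
        bit = (*SROM_CTL & SROM_DATA) ? 1u : 0u;  /* sample data */
        *SROM_CTL &= ~SROM_CLK;                   /* clock low  */
        return bit;
    }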

1.7.2 CPU Slot DC Characteristics

Table 4. CPU Slot DC Characteristics

Symbol  Parameter                  Min    Max    Unit
VIH     Input High Voltage         2.0    3.78   V (1)
VIL     Input Low Voltage          0.0    0.8    V
IIH     Input High Current                100    uA
IIL     Input Low Current                 100    uA
VOH     Output High Voltage        2.4    3.78   V (1)
VOL     Output Low Voltage         0.0    0.4    V
IOH     Output High Current        1.0           mA
IOL     Output Low Current         1.0           mA
ITS     Tristate Leakage Current          100    uA
CF      Signal Pin Capacitance            20     pF

Notes:

1) All devices on CPU Bus must be 5.0v tolerant.

2) These specifications are for the CPU slot envelope. They describe the resources that are supplied to the CPU card by the system, and the constraints placed on the CPU card by the system.

1.7.3 CPU Slot AC Timing

Table 5. CPU Slot AC Timing (5)

                                                       60 MHz (1)       66 MHz (1)
Parameter                     CPU Bus Signal           Min    Max       Min    Max     Unit
Clock Period                  BUS_CLK(n)               16.6             15             ns
Clock Duty Cycle              BUS_CLK(n)               40     60        40     60      %
Clock Skew (2)                BUS_CLK(n)                      0.75             0.75    ns
Clock Trace Length            BUS_CLK(n)               3.65   3.85      3.65   3.85    inch (3)
Clock Net Loading             BUS_CLK(n)                      10               10      pF (3)
Trace Impedance               All Critical Nets (4)    63     82.5      65     83      Ohms
Critical Signal Trace Length  All Critical Nets (4)           3                3       inch

Notes:

1 CPU bus speed is determined by the capabilities of the 660 Bridge and the installed CPU cards (see Section for determination of bus speed).

2 This is the (supplied) maximum skew between BUS_CLK(n) and any of the other devices on the CPU bus (given that each CPU card meets the requirements of note 3).

3 Wire BUS_CLK(n) point to point between connector and load. Allow a maximum of one load per clock net. Make these nets exactly the trace length specified to match other clock nets on CPU bus, or adjust the trace length to adjust the clock timing with respect to the other devices on the CPU bus.

4 Critical nets should be wired point to point on internal planes. These nets include:
TS#, TA#, TBST#, TSIZ[0:2], TT[0:4], DPE#, ABB#, AACK#, ARTRY#, DRTRY#, DBB#, DBG#, BG#, BR#, GBL#, SHD#, DP[0:7], A[0:31], DL[0:31], DH[0:31]. See Section 1.10 for information on the motherboard implementation of these nets.

5 These specifications are for the CPU slot envelope. They describe the resources that are supplied to the CPU card by the system, and the constraints placed on the CPU card by the system.

1.7.4 CPU Slot Power Supplies

Table 6 shows the power supplies that are available to each individual CPU card. Total power available for all CPU cards is also shown.

Table 6. CPU Slot Power Supplies (2)

Voltage  Regulation       Maximum Current per CPU Slot  Maximum Current, Both Slots
+5V      +/-5%            2.5 A                         5 A
+3.3V    +3.6V / -3.0V    7.5 A                         15 A
+3.6V    +/-5%            1 A                           2 A

Notes:

1. Combined power consumption must be less than 25W per slot.

2. These specifications are for the CPU slot envelope. They describe the resources that are supplied to the CPU card by the system, and the constraints placed on the CPU card by the system.

1.7.5 CPU Slot Thermal Envelope

The reference design CPU card is defined to dissipate a maximum of 25W. The reference design is intended for a broad range of applications. The particular physical implementation of the motherboard, the CPU card, and the enclosure allows the designers to supply air to the CPU card at an average of 200 linear feet per minute. This requires that the ambient temperature, as measured at the leading edge of the card, be maintained at a maximum of 34.5°C in order to adequately cool the CPU card. Figure 5 shows the airflow direction in the reference system.

The information presented here is intended only for reference, and is not presented as a solution to the thermal challenges of any particular application. Conduct independent thermal design, analysis, and testing of the particular system implementation.

1.7.6 CPU Slot Card Connector

The CPU card connector is a 2x128-pin DIMM-type connector with 1 mm pin spacing. Figure 6 shows an end view of the connector. Table 7 lists the connector pin assignments.

Table 7. Main CPU Slot Connector Pin Assignments

Pin  Signal           Pin  Signal           Pin  Signal           Pin  Signal
  1  GND               65  A23              129  DVR_MOD1         193  A22
  2  DVR_MOD0          66  A21              130  GND              194  GND
  3  PD0               67  GND              131  PD1              195  BUS_CLK1
  4  PD2               68  DRTRY#           132  PD3              196  GND
  5  GND               69  GND              133  DH31             197  3.3V
  6  DH30              70  A19              134  DH29             198  A20
  7  3.3/3.6V          71  3.3V             135  GND              199  A18
  8  DH28              72  A17              136  DH27             200  A16
  9  DH26              73  GND              137  DH25             201  A15
 10  DH24              74  A13              138  DH23             202  A14
 11  GND               75  A12              139  3.3/3.6V         203  GND
 12  DH22              76  A11              140  DH21             204  3.3V
 13  DH20              77  A9               141  GND              205  A10
 14  DH19              78  GND              142  DH18             206  A8
 15  DH17              79  A7               143  DH16             207  A6
 16  DH15              80  A5               144  DH14             208  A4
 17  GND               81  A3               145  DH13             209  GND
 18  3.3/3.6V          82  3.3V             146  3.3/3.6V         210  A2
 19  DH12              83  A0               147  GND              211  A1
 20  DH11              84  3.3V             148  DH10             212  GND
 21  DH9               85  GND              149  DH8              213  3.3V
 22  DH7               86  AACK#            150  DH6              214  BG#
 23  GND               87  GND              151  3.3V             215  GBL#
 24  DH5               88  DBG#             152  DH4              216  TSIZ0
 25  DH3               89  TA#              153  GND              217  3.3V
 26  DH2               90  DBB#             154  DH1              218  GND
 27  DH0               91  TEA#             155  DL31             219  TSIZ2
 28  GND               92  5.0V             156  DL30             220  TSIZ1
 29  3.3V              93  GND              157  DL29             221  TS#
 30  3.3V              94  XATS#            158  GND              222  TT3
 31  DL28              95  WT#              159  DL27             223  TMS
 32  DL26              96  TT1              160  DL25             224  GND
 33  DL24              97  ARTRY#           161  DL23             225  5.0V
 34  GND               98  SHD#             162  3.3V             226  TT0
 35  DL22              99  GND              163  DL21             227  CHECKSTOP#
 36  3.3V             100  5.0V             164  GND              228  TT4
 37  DL20             101  ABB#             165  DL19             229  CI#
 38  DL18             102  5.0V             166  DL17             230  GND
 39  DL16             103  TT2              167  DL15             231  BR#
 40  DL14             104  GND              168  DL13             232  TBST#
 41  GND              105  TCK              169  3.3V             233  DPE#
 42  3.3V             106  TDI              170  3.3V             234  TDO
 43  DL12             107  SRESET_A#        171  GND              235  HRESET_A#
 44  DL11             108  L2_AACK_EN       172  DL10             236  Reserved
 45  DL9              109  CONFIG#          173  DL8              237  GND
 46  DL7              110  GND              174  DL6              238  5.0V
 47  GND              111  TRST#            175  3.3V             239  SMI#
 48  3.3V             112  5.0V             176  DL5              240  INT_A#
 49  DL4              113  HALT_A/QREQ_A    177  GND              241  L2_PD0
 50  DL3              114  L2_WT#           178  DL2              242  GND
 51  DL1              115  RUN/QACK         179  DP7              243  MCP_A#
 52  DL0              116  GND              180  DP6              244  L2_PD1
 53  DP5              117  5.0V             181  DP4              245  5.0V
 54  3.3V             118  AP0              182  3.3V             246  GND
 55  3.3V             119  GND              183  DP3              247  TBEN
 56  DP2              120  AP1              184  DP1              248  AP2
 57  DP0              121  VREF             185  A31              249  AP3
 58  GND              122  L2_BR#           186  A30              250  L2_FLUSH#
 59  BUS_CLK2         123  L2_CLAIM#        187  GND              251  L2_BG#
 60  GND              124  L2_CLR#          188  BUS_CLK0         252  L2_INH#
 61  A29              125  PWR_DWN          189  GND              253  L2_DISABLE#
 62  A28              126  FREQ_ID0         190  A27              254  FREQ_ID1
 63  A26              127  FREQ_ID2         191  3.3V             255  GND
 64  A25              128  GND              192  A24              256  FREQ_ID3

1.7.7 Auxiliary CPU Slot Connectors

The auxiliary CPU connectors J15 and J16 are 24-pin headers which provide additional power and sideband signals for supporting a second CPU on the CPU cards. The headers are 2x12 pins on 0.100 in. centers. J15 is located near and supports CPU slot P1 (J8), and J16 is located near and supports CPU slot P2 (J9).

This interface is for evaluation purposes only, and is not a supported part of the reference design. These connectors may not be implemented on the motherboard or the CPU cards. Figure 7 shows the auxiliary connector, and Table 8 shows the pinout.

Table 8. Aux CPU Slot Connector Pinout

Pin  Signal Name         Pin  Signal Name

 1   SRESET_B#           13   +3.3V
 2   GND                 14   GND
 3   HRESET_B#           15   +3.3V
 4   GND                 16   +3.3V
 5   HALT_B/QREQ_B       17   +3.3V
 6   GND                 18   GND
 7   MCP_B#              19   +3.3V
 8   GND                 20   +3.3V
 9   INT_B#              21   +3.3V
10   GND                 22   GND
11   +3.3V               23   Reserved
12   +3.3V               24   +3.3V

1.8 L2 Tag/SRAM Interface (L2 Slot)

There are several types of level 2 (L2) caches that can be used with the reference design. A serial (inline) L2 cache can be added to the CPU card to provide caching and bus traffic filtering for the CPUs. A CPU busmastering look-aside L2 can be installed in one of the CPU card slots to provide write-thru or write-back caching. The L2 card described in this section is installed in the L2 tag/SRAM slot of the reference design motherboard. In conjunction with the L2 cache controller located in the 660 Bridge, the tag/SRAM card provides unified, write-thru, direct-mapped, look-aside level 2 caching for the system 604 bus over the address range from 0 to 1G of the CPU memory space.

The tag/SRAM card provides the tagRAM and SRAM components of the L2. By changing the card in the slot, the cache can be configured as no cache, an asynchronous cache, or a synchronous cache. The size of the cache can be 256K, 512K, or 1MB. The size and type of the cache are sensed by the firmware through four presence detect bits defined on the interface.

The size of the tagRAM and data SRAM in the L2 depends on the devices and configuration of the tag/SRAM card. Since the different sizes and configurations are transparent to the motherboard, the L2 size can be changed without changing the system configuration. The type (synchronous or asynchronous) of the SRAM and tagRAM affects the number of clock cycles required to access the cache, so a change of tag/SRAM type may require the firmware to reconfigure the 660 Bridge L2 controller.

The L2 supplies data to the CPU bus on read hits and snarfs the data (updates the SRAM data while the memory controller is accessing DRAM memory) on read/write misses. It snoops PCI to memory transactions. Typical synchronous SRAM read performance with 9ns SRAM is 3-1-1-1, followed by -2-1-1-1 on pipelined reads. Typical asynchronous SRAM read performance with 15ns SRAM is 3-2-2-2, followed by -3-2-2-2 on pipelined reads.
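
To make the wait-state notation concrete: at 66MHz the bus clock is 15ns, and a cache line moves as a 4-beat burst of 8 bytes per beat. The following minimal sketch (illustrative C, not part of the reference design) converts the patterns above into clocks and nanoseconds.

    #include <stdio.h>

    /* Worked example: convert an initial-wait/beat-wait pattern such as
     * 3-1-1-1 into clocks and nanoseconds for a 4-beat (32-byte) burst.
     * Assumes a 66 MHz CPU bus (15 ns period) and 8 bytes per data beat;
     * these figures are illustrative, not measured values. */
    static unsigned burst_clocks(unsigned first, unsigned beat, unsigned beats)
    {
        return first + beat * (beats - 1);  /* e.g. 3-1-1-1 -> 3+1+1+1 = 6 */
    }

    int main(void)
    {
        const double ns_per_clk = 15.0;          /* 66 MHz bus */
        unsigned sync  = burst_clocks(3, 1, 4);  /* synchronous SRAM  */
        unsigned async = burst_clocks(3, 2, 4);  /* asynchronous SRAM */
        printf("3-1-1-1: %u clocks, %.0f ns per 32-byte line\n",
               sync,  sync  * ns_per_clk);
        printf("3-2-2-2: %u clocks, %.0f ns per 32-byte line\n",
               async, async * ns_per_clk);
        return 0;
    }

By this arithmetic, a synchronous read burst completes in 6 clocks (90 ns) versus 9 clocks (135 ns) asynchronous, before the pipelining benefit noted above.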

For more information on the operation, capabilities, and configuration of the L2 cache, see the 660 Bridge User's Manual. For information on the particular tag/SRAM card (if any) supplied with the reference design, see the appropriate data sheet in Section 22.

1.8.1 L2 Slot ID ROM

The L2 card ID ROM is a serial EEPROM device which can be installed on certain proposed L2 cards. It is intended to allow systems to support caches which span the full range of performance and features. The ID ROM contains information which may be used by the POST to fully test the cache SIMM. It also contains timing information for the SIMM. This information is specified by the SIMM designer and placed in the EEPROM. At power on, system performance can be tuned for best performance, given the exact capabilities of the SIMM. Diagnostics may test the entire tag/SRAM for verification of its function. In the event that a given card is not a supported configuration for the system in which it is installed, the cache may be disabled without any adverse side effects (e.g., placing a writethrough-only SIMM in a system which requires writeback capability).

The EEPROM contains 128 bits which are organized into 16 bytes. Functional capabilities and diagnostic information are typically defined by bytes 0:3. Timing information is contained in bytes 4:7. Bytes 8:15 are reserved for future use. The L2 card ID ROM contains the fields found in Table 9. Note that ID ROM signals are numbered in big-endian fashion.

Table 9. L2 Cache ID ROM

Byte  Bit
Addr  Field   Parameter     Description                                           Units

0     0       PD            Presence detect. Programmed to zero. The serial
                            data pin should be pulled up on the CPU motherboard
                            so that cacheless systems return a one when
                            attempting to access the EEPROM.

0     1       Parity        Parity detect. Identifies that a cache module
                            implements parity RAM.

0     2       Copyback      Copyback capable. Identifies that a cache module
                            supports copyback by implementing a dirty RAM and
                            having a tag that can drive the appropriate address
                            bits onto the bus.

0     3       Cache Type    Set if the cache module contains an onboard cache
                            controller. Reset if cache timing is performed by
                            the motherboard.

0     4       Reserved

0     5:7     REV           Revision number.

1     0:4     n             Cache size = 2^(n+1). Identifies the size of the      Bytes
                            cache. Used to determine the number of tag entries.
                            This information is helpful for diagnostic testing
                            of the tag RAM.

1     5:7     p             Cache line size = 2^(p+1). Used to determine the      Bytes
                            number of tag entries. This information is helpful
                            for diagnostic testing of the tag RAM.

2:3   0:7,    ID            Two character ASCII encoding for manufacturer
      0:7                   identification.

4     0:1     Tag Type      Tag RAM types as follows:
                            0 - Asynchronous tag RAM
                            1 - Synchronous tag RAM
                            2 - Reserved
                            3 - Reserved

4     2       Reserved

4     3:7     Tag Speed     Binary value encoding the tag RAM match time.         ns

5     0:1     SRAM Type     SRAM pipeline latency as follows:
                            0 - Asynchronous RAM
                            1 - Synchronous burst RAM
                            2 - Synchronous pipeline burst RAM
                            3 - Reserved

5     2       Reserved

5     3:7     SRAM Speed    Binary value encoding the SRAM read data access       ns
                            time.

6     0:7     FMAX          Synchronous RAM maximum clock frequency.              MHz

7     0:1     SRAM Voltage  SRAM signalling voltage as follows:
                            0 - 5v signals
                            1 - 3.3v signals
                            2 - 2.5v signals
                            3 - Reserved

7     2:3     Tag Voltage   Tag RAM signalling voltage as follows:
                            0 - 5v signals
                            1 - 3.3v signals
                            2 - 2.5v signals
                            3 - Reserved

7     4       Reserved

7     5:7     Data Type     Identifier for implementation dependent data.

8:15          Data Field    Implementation dependent data. For example, this
                            field could contain a heavily compressed low-res
                            JPEG image of the design team.
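
As a reading aid, the sketch below decodes the size fields of byte 1 under the big-endian bit numbering noted above (bit field 0:4 occupies the five most significant bits of the byte). This is a minimal sketch in C; the ROM buffer contents are a made-up example, not data from a real card.

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal decode of the L2 ID ROM size fields in Table 9.  The ROM
     * bits are numbered big-endian, so bit field 0:4 occupies the five
     * most significant bits of byte 1. */
    int main(void)
    {
        uint8_t rom[16] = { 0x00, 0x94, 'I', 'B' };  /* bytes 4..15 omitted */

        unsigned n = rom[1] >> 3;            /* bits 0:4 of byte 1 */
        unsigned p = rom[1] & 0x07;          /* bits 5:7 of byte 1 */

        unsigned long cache_bytes = 1UL << (n + 1);  /* cache size = 2^(n+1) */
        unsigned long line_bytes  = 1UL << (p + 1);  /* line size  = 2^(p+1) */

        printf("cache %lu bytes, line %lu bytes, %lu tag entries\n",
               cache_bytes, line_bytes, cache_bytes / line_bytes);
        return 0;
    }

With the example value 0x94 in byte 1, n = 18 and p = 4, which decodes to a 512K cache with a 32-byte line and 16384 tag entries.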

1.8.1.1 L2 Cache ID ROM Signaling Interface

The serial communication protocol used by the L2 cache ID ROM is defined as follows: one start bit, followed by an eight-bit control byte, then eight bits of returned data. The control byte consists of a two-bit read command (1,0), a four-bit address, and two don't-care bits. The four-bit address selects which of the 16 bytes to read. The entire sequence is shown in Figure 8 and Table 10.

Table 10. Serial Communications Protocol Sequence

Start  1  0  A3  A2  A1  A0  X  X  D7  D6  D5  D4  D3  D2  D1  D0

The start bit consists of a falling edge on SDA while SCLK is held high. With the exception of the start bit, all transitions on SDA occur while SCLK is low. The read cycle is driven on the bus as shown in Figure 8. For more information see Section , Serial ROM Control Register.
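
A pseudo-driver sketch of this read sequence is shown below. The accessors set_sclk(), set_sda(), and get_sda() are hypothetical stand-ins for manipulating the PD[0:1] lines through the Serial ROM Control Register at 8000 0868h, and the exact point at which read data is sampled should be verified against the timing diagram in Figure 8.

    #include <stdint.h>

    /* Hypothetical line accessors; in the reference design these bits are
     * reached through the Serial ROM Control Register at 8000 0868h
     * (PD[0] = serial clock, PD[1] = serial data). */
    extern void set_sclk(int level);
    extern void set_sda(int level);
    extern int  get_sda(void);

    /* Shift one bit out: change SDA while SCLK is low, then raise SCLK.
     * (Only the start bit changes SDA while SCLK is high.) */
    static void send_bit(int b)
    {
        set_sclk(0);
        set_sda(b);
        set_sclk(1);
    }

    /* Read one of the 16 ID ROM bytes; a sketch of the Table 10 sequence. */
    uint8_t l2_idrom_read(unsigned addr)
    {
        uint8_t data = 0;

        /* Start bit: falling edge on SDA while SCLK is held high. */
        set_sclk(1);
        set_sda(1);
        set_sda(0);

        /* Control byte: read command (1,0), A3..A0, two don't-care bits. */
        send_bit(1);
        send_bit(0);
        for (int i = 3; i >= 0; i--)
            send_bit((addr >> i) & 1);
        send_bit(0);
        send_bit(0);

        /* Returned data, D7 first. */
        for (int i = 7; i >= 0; i--) {
            set_sclk(0);
            set_sclk(1);
            data |= (uint8_t)(get_sda() << i);
        }
        set_sclk(0);
        return data;
    }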



1.8.2 L2 Slot Signal Descriptions

Table 11 describes the signals used to interface the L2 tag/SRAM card to the motherboard. These signals are carried by a 182 pin connector, which provides the 72-bit data and parity bus from the CPU bus, the tagRAM and SRAM address bus, and the tagRAM and SRAM control signals. In Table 11, signals are labeled as inputs (I) or outputs (O) as viewed from the motherboard. Thus ADDR1 is shown as an (O) output because it is an output of the motherboard.

Table 11. L2 Slot Signal Descriptions

Signal            I/O    Notes   Description

60X Bus

A[0:28]           I/O    1
    Address Bus
    Represents a portion of the 32 bit physical address of the current
    transaction. The address is valid from the bus cycle in which TS# is
    asserted through the bus cycle in which AACK# is asserted. These signals
    are big-endian, even though ADDR0 and ADDR1 use little-endian nomenclature.
    Motherboard: These signals are connected to A[0:28] on the CPU slots.
    L2 Card:

ADDR1             O      2
    Address Bus Bit 1
    Represents the LSB+1 bit of the physical address for asynchronous cache
    SRAM.
    Motherboard: ADDR1, SRAM_CNT_EN0#, and SRAM_CNT_EN1# are tied together on
    the motherboard (and connected to RESERVED 2 through a 0 ohm resistor).
    These signals are connected to SRAM_CNT_EN#/ADDR1 on the 660 Bridge. While
    configured for asynchronous SRAM, the 660 Bridge drives this pin with the
    second-to-least significant address bit. While configured for synchronous
    SRAM, the 660 Bridge drives this pin with the SRAM count enable signal.
    L2 Card:

ADDR0             O      2
    Address Bus Bit 0
    Represents the LSB bit of the physical address for asynchronous cache SRAM.
    Motherboard: ADDR0, SRAM_ADS0#, and SRAM_ADS1# are tied together on the
    motherboard (and connected to RESERVED 1 through a 0 ohm resistor). These
    signals are connected to SRAM_ADS#/ADDR0 on the 660 Bridge. While
    configured for asynchronous SRAM, the 660 Bridge drives this pin with the
    least significant address bit. While configured for synchronous SRAM, the
    660 Bridge drives this pin with the SRAM address strobe.
    L2 Card:

CLK[0:4]          O      1,2
    System Bus Clocks
    Bus clocks for the CPU bus. Tag/SRAM cards will typically use CLK[2] for
    tagRAM, CLK[0,1] for a 512K SRAM cache, and CLK[0,1,3,4] for a 1M SRAM
    cache.
    Motherboard: Nominally 66MHz or 60MHz depending on the FREQ_ID[0:3] of
    both CPU slots. If no L2 card is present, system firmware may disable
    these clocks via the Freeze Clock Registers at 8000 0860h and 8000 0862h.
    See Section for details on the operation of these ports.
    L2 Card: To ensure minimal clock skew with respect to the other CPU bus
    clocks, make these nets 3 in. +/- 0.1 in.

DH[0:31]          I/O    1
    Data Bus High
    Most significant 4 bytes of the data bus.

        Data Bus Signals   Byte Lane
        DH[0:7]            0 (MSB)
        DH[8:15]           1
        DH[16:23]          2
        DH[24:31]          3

    Motherboard: These signals are connected to DH[0:31] on the CPU slot.
    L2 Card:

DL[0:31]          I/O    1
    Data Bus Low
    Least significant four bytes of the data bus.

        Data Bus Signals   Byte Lane
        DL[0:7]            4
        DL[8:15]           5
        DL[16:23]          6
        DL[24:31]          7 (LSB)

    Motherboard: These signals are connected to DL[0:31] on the CPU slot.
    L2 Card:

DP[0:7]           I/O    1
    Data Bus Parity
    Indicates the parity by byte of the CPU data bus (1 = odd, 0 = even).

        Parity Bit   Byte Lane        Parity Bit   Byte Lane
        DP[0]        0                DP[4]        4
        DP[1]        1                DP[5]        5
        DP[2]        2                DP[6]        6
        DP[3]        3                DP[7]        7

    Motherboard: The 660 Bridge checks data bus parity (using DP[0:7],
    DH[0:31], and DL[0:31]) during CPU writes. During CPU read transfers, the
    660 Bridge drives DP[0:7] with the (odd) parity information stored in the
    DRAM (if there is an L2 hit, the L2 supplies the stored parity
    information). These signals are connected to DP[0:7] on the CPU slot.
    L2 Card:

PD[0:3]           I/O
    Presence Detect
    As inputs, these signals tell the motherboard firmware what type of
    tag/SRAM card is present in the L2 slot. They can also be used to read a
    serial ROM that can be attached to PD[0:1], and which contains
    configuration information about the card in the slot (see Section , CPU
    Card).
    Motherboard: 10K pullup. The status of PD[0:3] can be read via the L2 PD
    Register 8000 080Dh bits [3:0], with the following meanings (a decoding
    sketch follows this table):

        Bit 3  Bit 2  Bit 1  Bit 0   Slot contains
        0      0      0      0       Burst 256K No Parity
        0      0      0      1       Burst 512K No Parity
        0      0      1      0       Burst 1M No Parity
        0      0      1      1       Reserved
        0      1      0      0       Burst 256K w/Parity
        0      1      0      1       Burst 512K w/Parity
        0      1      1      0       Burst 1M w/Parity
        0      1      1      1       Reserved
        1      0      0      0       Asynch 256K No Parity
        1      0      0      1       Asynch 512K No Parity
        1      0      1      0       Asynch 1M No Parity
        1      0      1      1       Reserved
        1      1      0      0       Asynch 256K w/Parity
        1      1      0      1       Asynch 512K w/Parity
        1      1      1      0       Asynch 1M w/Parity
        1      1      1      1       No Cache Present or Serial ROM Present *

    * If bits 3:0 of port 80D are all 1, then either no L2 card is present or
    a serial ROM with configuration information is present. The motherboard
    can read the serial ROM by driving PD[0] with the serial clock and reading
    or writing data on PD[1]. This interface is controlled via the Serial ROM
    Control Register 8000 0868h.
    L2 Card: A 100 ohm resistor is used when required to pull down these
    signals.

SRAM_ADS[1:0]#    O      2
    SRAM Address Strobes (Synchronous SRAM)
    Address strobes for burst SRAM. While asserted, burst SRAM latches in the
    address on the address lines.
    Motherboard: See ADDR0.
    L2 Card:

SRAM_ALE          O      2
    SRAM Address Latch Enable (Asynchronous SRAM)
    Enables latching of the address for the SRAMs to support address
    pipelining when asynchronous SRAMs are used. Always high for burst SRAM.
    Motherboard:
    L2 Card:

SRAM_CNT_EN[1:0]# O      2
    SRAM Count Enables
    Enables burst SRAM to increment the burst address on the clock.
    Motherboard: See ADDR1.
    L2 Card:

SRAM_OE[1:0]#     O      2
    SRAM Output Enables
    Enables the SRAM output drivers.
    Motherboard:
    L2 Card:

SRAM_WE[0:7]#     O      2
    SRAM Write Enables
    Enables updating of SRAM with data from the CPU data bus.
    Motherboard: All eight WE# pins are wired to the same net on the
    motherboard. Write updates of less than 8 bytes are not supported.
    L2 Card:

TAG_CLR#          O      2
    TagRAM Clear
    Indicates that all entries in the tagRAM should be invalidated.
    Motherboard: This signal is also connected to (shared with) the CPU slot
    L2 signals. An active low pulse is generated on this signal for one CPU
    clock whenever a write is performed to 8000 0814h.
    L2 Card:

TAG_MATCH         I      2
    Tag Match
    Indicates that a hit in the tagRAM has occurred.
    Motherboard: 200 ohm pullup.
    L2 Card:

TAG_VALID         O      2
    Tag Valid
    Indicates that the current block should be marked valid in the tagRAM.
    Motherboard:
    L2 Card:

TAG_WE#           O      2
    TagRAM Write Enable
    Enables the tagRAM to be updated with the current address tag.
    Motherboard:
    L2 Card:

TAG_OE#           O
    TagRAM Output Enable
    Enables the tagRAM output to drive the address bus.
    Motherboard: 10K pullup. Only for write back caches, which are not
    supported by the motherboard.
    L2 Card:

DIRTYIN           O
    Dirty In
    Motherboard: 1K pulldown. Unsupported function on motherboard.
    L2 Card:

DIRTYOUT          I
    Dirty Output
    Motherboard: No connection. Unsupported function on motherboard.
    L2 Card:

STANDBY           I
    Standby Power Mode
    Indicates that the TAG and SRAMs are to be placed in a low power state.
    Motherboard: 1K pulldown.
    L2 Card:

GND
    Logic Ground
    Motherboard: These pins are connected to the +5v return pins of the power
    connector, and to the devices on the motherboard.

VCC5
    +5.0v Supply
    Motherboard: These pins are connected to the +5v pins of the power
    connector and the 5v logic on the motherboard.
    L2 Card:

3.3V
    +3.3v Supply
    Motherboard: These pins are connected to the 3.3v pins of the power supply
    connector and the 3.3v logic on the motherboard.
    L2 Card:

Notes

1. Refer to the PowerPC 604 User's Manual and The IBM27-82660 PowerPC to PCI Bridge User's Manual for the details of the definitions and timing of these signals. These signals conform to the functions defined in those specifications.

2. Refer to The IBM27-82660 PowerPC to PCI Bridge User's Manual for the details of the definitions and timing of these signals. These signals conform to the functions defined in that specification.
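
For illustration, the PD[0:3] encoding above (readable through bits [3:0] of the L2 PD Register at 8000 080Dh) can be decoded as in the following sketch. How the register read itself is performed is platform specific, and describe_l2() is a hypothetical helper, not part of the reference design firmware.

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the L2 presence-detect bits described under PD[0:3] in
     * Table 11.  The value is assumed to have already been read from
     * bits [3:0] of the L2 PD Register at 8000 080Dh. */
    static void describe_l2(uint8_t pd)
    {
        static const char *size[] = { "256K", "512K", "1M", "Reserved" };

        pd &= 0x0F;
        if (pd == 0x0F) {
            puts("No cache present (or serial ROM present; probe PD[0:1])");
            return;
        }
        if ((pd & 0x03) == 0x03) {
            puts("Reserved encoding");
            return;
        }
        printf("%s %s %s\n",
               (pd & 0x08) ? "Asynch" : "Burst",        /* bit 3: SRAM type */
               size[pd & 0x03],                          /* bits 1:0: size   */
               (pd & 0x04) ? "w/Parity" : "No Parity");  /* bit 2: parity    */
    }

    int main(void)
    {
        describe_l2(0x05);   /* example: prints "Burst 512K w/Parity" */
        return 0;
    }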

1.8.3 L2 Slot DC Characteristics

Table 12. L2 Slot DC Characteristics (2)

SYM   Parameter                  Min    Max    Unit

VIH   Input High Voltage         2.0    3.78   V (1)
VIL   Input Low Voltage          0.0    0.8    V
IIH   Input High Current                100    uA
IIL   Input Low Current                 100    uA
VOH   Output High Voltage        2.4    3.78   V (1)
VOL   Output Low Voltage         0.0    0.4    V
IOH   Output High Current        1.0           mA
IOL   Output Low Current         1.0           mA
ITS   Tristate Leakage Current          100    uA
CF    Signal Pin Capacitance            20     pF

Notes:

1. All devices on the CPU bus must be 5.0v tolerant. Use of 5.0v SRAM is not recommended due to the possibility that the additional slewing time required for 5.0v operation may limit the maximum frequency.

2. These specifications are for the L2 slot envelope. They describe the resources that are supplied to the L2 card by the system, and the constraints placed on the L2 card by the system.

1.8.4 L2 Slot AC Timing

Table 13. L2 Slot AC Timing (5)

Parameter               CPU Bus Signal          60 MHz (1)     66 MHz (1)     Unit
                                                Min    Max     Min    Max

Clock Period            BUS_CLK(n)              16.6           15             ns
Clock Duty Cycle        BUS_CLK(n)              40     60      40     60      %
Clock Skew (2)          BUS_CLK(n)                     0.75           0.75    ns
Clock Trace Length      BUS_CLK(n)              2.9    3.1     2.9    2.9     inches (3)
Clock Net Loading       BUS_CLK(n)                     10             10      pF (3)
Trace Impedance         All Critical Nets (4)                                 Ohms
Critical Signal
Trace Lengths           All Critical Nets (4)          2              2       inches

Notes:

1) CPU bus speed is determined by the capabilities of the processor card(s) installed in the system (see Section for determination of bus speed).

2) Skew between the BUS_CLKs of other devices on the CPU bus when the L2 tag/SRAM card meets the requirements of note 3.

3) BUS_CLK(n) should be wired point to point between the connector and a single load at the specified trace length to match the other clock nets on the CPU bus; trace length may be varied to adjust the clock skew of devices on the CPU cards.

4) Critical nets should be wired point to point. These nets include: DP[0:7], A[0:31], DL[0:31], and DH[0:31]. See Section 1.10 for information on the motherboard implementation of these nets.

5) These specifications are for the L2 slot envelope. They describe the resources that are supplied to the L2 card by the system, and the constraints placed on the L2 card by the system.

1.8.5 L2 Slot Power Supplies

Table 14 shows the power supplies that are available to the L2 card.

Table 14. L2 Slot Power Supplies (2)

Voltage   Regulation       Maximum Current

+5v       +/-5%            2.0A
+3.3v     +3.6v / +3.0v    2.0A

Notes:

1) Combined power consumption must be less than 9W.

2) These specifications are for the L2 slot envelope. They describe the resources that are supplied to the L2 card by the system, and the constraints placed on the L2 card by the system.

1.8.6 L2 Slot Thermal Envelope

The reference design L2 tag/SRAM card is defined to dissipate a maximum of 9W. The reference design is intended for a broad range of applications. The particular physical system implementation of the motherboard, the CPU board, the tag/SRAM card, and the enclosure that was tested in our lab delivers air to the L2 card at an average of 25 linear feet per minute (50 to 66 fpm on the side of the L2 card nearest the CPU cards, and 6 to 23 fpm of reverse flow on the other side of the L2 card). This requires that the ambient temperature be maintained at a maximum of 45°C in order to adequately cool the L2 card. Figure 9 shows the airflow direction in the reference system.

The information presented here is intended only for reference, and is not presented as a solution to the thermal challenges of any particular application. Conduct independent thermal design, analysis, and testing of the particular system implementation.

1.8.7 L2 Slot Dual Voltage Capability

The L2 tag/SRAM cards designed for this connector may use either 5v or 3.3v data outputs. The motherboard has been designed to tolerate 5v signals on the CPU bus and the tag/SRAM interface; however, due to the increase in slewing time for 5v signals, the CPU bus maximum operating frequency can be reduced when 5v data SRAM is used. It is recommended that 3.3v output data SRAM be used.

Because some motherboards may be designed that are not 5v tolerant, the L2 card design allows the motherboard connector to be keyed such that it will not accept tag/SRAM cards that require VDD = 5v. See Figure 9.

Tag/SRAM connector pins 66, 67, 157, and 158 are assigned as +5v power pins. Motherboards that are not 5v tolerant use a connector which replaces these pins with a blocking key. All 3.3v output tag/SRAM cards have a notch cut at the corresponding position. This allows a tag/SRAM card with 3.3v data outputs to be plugged into either a 5v tolerant board or one that is only 3.3v tolerant. Tag/SRAM cards using 5v outputs must not have the notch cut at this position. The blocking key on the motherboard connector prevents a 5v tag/SRAM card from plugging into a motherboard which cannot tolerate its output voltage level.

Even though the above four +5v pins are removed, other +5v pins remain available for use. This allows 3.3v cards to use 5v tagRAM and SRAM which has 3.3v output levels.

Pin numbering is consistent between the 5v and 3.3v tag/SRAM cards. This means that on 3.3 volt cards, pins 66, 67, 157, and 158 are missing.

See Section for more information.

1.8.8 L2 Slot Connector

The L2 tag/SRAM connector is a 2x92 pin connector with 1.27mm pin spacing. Figure 10 shows an end view of the connector (socket). Table 15 shows the connector pin assignments. This is a 5v tolerant connector.

1.8.9 L2 Slot Pin Assignments

Table 15. L2 Slot Pinout

Pin  Signal Name         Pin  Signal Name       Pin  Signal Name      Pin  Signal Name

  1  GND                   2  PD0/IDS_CLK         3  PD2                4  DH30
  5  DH28                  6  DH26                7  DH24               8  VCC3.3
  9  DP3                  10  DH22               11  DH20              12  DH19
 13  GND                  14  DH17               15  DP2               16  DH15
 17  DH12                 18  VCC5               19  DH11              20  DH9
 21  DP1                  22  DH7                23  VCC3.3            24  DH5
 25  DH3                  26  DH2                27  DH0               28  DP0
 29  GND                  30  CLK1               31  GND               32  DL28
 33  DL26                 34  DL24               35  DP7               36  VCC5
 37  DL22                 38  DL20               39  DL18              40  DL16
 41  GND                  42  DP6                43  DL14              44  DL12
 45  DL11                 46  GND                47  DL9               48  DP5
 49  DL7                  50  DL4                51  VCC3.3            52  DL3
 53  DL1                  54  DL0                55  GND               56  CLK2 (for TagRAM)
 57  GND                  58  DP4                59  SRAM_OE0#         60  SRAM_OE1#
 61  VCC3.3               62  ADDR0              63  RESERVED          64  SRAM_ADS0#
 65  SRAM_ADS1#           66  VCC5               67  VCC5              68  A28
 69  A26                  70  A25                71  A23               72  GND
 73  A21                  74  A19                75  A17               76  A13
 77  VCC3.3               78  A12                79  A11               80  A9
 81  GND                  82  A7                 83  A5                84  A3
 85  A0                   86  VCC5               87  TAG_CLR#          88  TAG_MATCH
 89  TAG_OE#              90  DIRTYIN            91  GND               92  GND
 93  PD1/IDS_DATA         94  PD3                95  DH31              96  DH29
 97  DH27                 98  DH25               99  VCC3.3           100  SRAM_WE3#
101  DH23                102  DH21              103  DH18             104  GND
105  DH16                106  SRAM_WE2#         107  DH14             108  DH13
109  VCC5                110  DH10              111  DH8              112  SRAM_WE1#
113  DH6                 114  VCC3.3            115  DH4              116  GND
117  CLK0                118  GND               119  DH1              120  SRAM_WE0#
121  DL31                122  DL30              123  GND              124  DL29
125  DL27                126  DL25              127  VCC5             128  SRAM_WE7#
129  DL23                130  DL21              131  DL19             132  GND
133  DL17                134  SRAM_WE6#         135  DL15             136  DL13
137  GND                 138  DL10              139  DL8              140  SRAM_WE5#
141  DL6                 142  VCC3.3            143  DL5              144  DL2
145  GND                 146  CLK3              147  GND              148  CLK4
149  GND                 150  SRAM_WE4#         151  SRAM_ALE         152  VCC3.3
153  ADDR1               154  RESERVED          155  SRAM_CNT_EN0#    156  SRAM_CNT_EN1#
157  VCC5                158  VCC5              159  A27              160  A24
161  A22                 162  A20               163  GND              164  A18
165  A16                 166  A15               167  A14              168  VCC3.3
169  A10                 170  A8                171  A6               172  GND
173  A4                  174  A2                175  A1               176  BURST_MODE
177  VCC5                178  TAG_VALID         179  TAG_WE#          180  STANDBY
181  DIRTYOUT            182  GND

1.9 JTAG/RISCWatch Interface

The PowerPC family of processors provides a diagnostic interface for the hardware and software developer. This interface uses standard JTAG connections to allow the RISCWatch development tool access to the processor. In systems with more than one device, a multiple device JTAG loop can be created. Figure 11 shows the single RISCWatch interface connector (J18) on the motherboard, which can be connected to each CPU card slot individually, or to both CPU card slots in series, by changing jumper positions on J81. Table 16 shows the pin assignments.

Table 16. J18 Pin Assignments

Pin  Signal Name               Pin  Signal Name

 1   CHECK_STOP#                9   GND
 2   HRESET_ESP#               10   KEY
 3   SRESET_ESP#               11   GND
 4   CNTL/SCAN_DATA (TMS)      12   RESERVED1
 5   SHIFT_CLOCK (TCK)         13   RESERVED2
 6   RUN/BREAKPOINT            14   +3.3V
 7   SCAN_IN (TDI)             15   OCS_OVERRIDE (TRST)
 8   SCAN_OUT (TDO)            16   RESERVED3

1.10 Electrical Model of Major Signal Groups

This section describes an electrical model of the critical signal paths in the reference design. The complete model of a path consists of the 660 Bridge model, the model of the signal paths on the motherboard, the CPU and L2 slot connector models, the model of the L2 card, and the model of the CPU card(s).

1.10.1 Motherboard Electrical Model

The physical signal paths (traces) of some major signal groups are modeled in the following figures.

1.10.2 CPU Card and L2 Card Interface Models

Figure 16 shows the model of the CPU slot interfaces, and Figure 17 shows the model of the L2 slot interface. Both interfaces are modeled in the mated condition, and the values shown are typical.

1.10.3 CPU Card Model

See the Electrical Model of Major Signal Groups in the CPU Card section for the electrical model of the Cheetah3 PowerPC 604 CPU Card. For other CPU cards, see their data sheet for model information.

1.10.4 L2 Card Model

See the data sheet of the L2 Card (if any) for electrical model information.

1.10.5 660 Bridge Electrical Model

For the 660 Bridge electrical model, see the 660 Bridge User's Manual.

1.10.6 Model Building

When constructing the combined model of the net with the interfaces, connect the IN node of the interface model to the motherboard if the signal is an output from the motherboard. On the other hand, if the signal is an output of the plug-in card, connect the IN node of the interface model to the plug-in card. The model of the CPU (or L2) card is placed on the other side of the interface model.

For example, Figure 18 models the address lines during a CPU bus operation mastered by the CPU card in slot 1. The IN node of the slot 1 model connects to the CPU card. The IN nodes of the other slot models connect to the motherboard.

If a particular card is not installed, the electrical model of that slot is still to be used. Connect the IN node of the slot model to the motherboard net, and leave the OUT node unconnected.