3.3.5 Configuration Settings For Existing LEON Devices
The table below shows configurations for existing Cobham/Aeroflex LEON devices. Please refer to
the previous subsections under section 3.3 for comments and descriptions of the different values.
TABLE 9. LEON processor configurations

VHDL generic        UT699       UT700       GR712RC             GR740     LEON3-RTAX
                    value       value       value               value     example value
-------------------------------------------------------------------------------------------
dsu                 1           1           1                   1         1
fpu                 2           2           2                   2         0
v8                  2           16#32#+4    2                   16#32#    0
mac                 0           0           0                   0         0
nwp                 4           4           2                   4         2
icen                1           1           1                   1         1
isets               2           4           4                   4         1
isetsize            4           4           4                   4         8
irepl               0           0           0                   0         0
ilinesize           8           8           8                   8         8
dcen                1           1           1                   1         1
dsets               2           4           4                   4         1
dsetsize            4           4           4                   4         4
drepl               0           0           0                   0         0
dlinesize           4           4           4                   8         4
dsnoop              6           6           6                   6         0
mmuen               1           1           1                   1         0
itlbnum / dtlbnum   16 / 16     16 / 16     16 / 16             16 / 16   - / -
tlb_type            0           2           2                   2         0
tlb_rep             0           0           0                   0         0
lddel               2           1           1                   1         1
tbuf                2           4           4                   8         2
pwd                 2           2           2                   2         2
svt                 1           1           1                   1         1
smp                 0           0           1                   1         0
bp                  N/A (0)     1           1                   N/A       0
npasi               N/A (0)     N/A (0)     N/A (0)             1         N/A (0)
pwrpsr              N/A (0)     N/A (0)     N/A (0)             1         N/A (0)
LEON version used   LEON3FT v1  LEON3FT v2  LEON3FT v1 with BP  LEON4v0   LEON3FTv1 to LEON3v3
3.4 LEON subsystem (gaisler.subsys.leon_dsu_stat_base)
GRLIB contains a subsystem component that can be used to instantiate the LEON processor, debug
support unit and a statistics unit (performance counters). The subsystem is available in
lib/gaisler/subsys/ and also has a corresponding xconfig script. Please refer to the GRLIB IP Core
User's Manual (grip.pdf) for documentation of LEON_DSU_STAT_BASE.
4 Multiple Buses, Clock Domains and Clock Gating
4.1 Introduction
This section describes some techniques that can be used with GRLIB to create more complex system
architectures with multiple buses and/or clock domains.
Peripheral IP cores that need to operate in a separate clock domain usually have their own clocking
and synchronization built in. This is not covered here; see the core-specific documentation.
4.2 Creating Multi-Bus Systems
4.2.1 Overview
The on-chip bus may become a bottleneck in systems where the processors and peripherals all share
the same bus. Connecting all IP cores together also introduces high signal loads in the system,
which can lead to timing issues at implementation. These issues can be solved by partitioning the
system into several AHB buses.
4.2.2 GRLIB Facilities
In order to partition the system into multiple buses, the general-purpose AHB bridge IP cores
AHB2AHB (uni-directional) and AHBBRIDGE (bi-directional) are included in GRLIB. There are also
special-purpose cores, such as the IOMMU and L2-cache, that have bridge functionality built into
them.
4.2.3 GRLIB AMBA Plug&Play in Multi-Bus Systems
Software and debug monitors such as GRMON can detect all IP cores connected to the on-chip
bus(es) by scanning the plug&play configuration area. The format and function of this area are
described in the GRLIB User's Manual and in the GRLIB IP Core User's Manual documentation for
the AHB controller (AHBCTRL) and AHB/APB bridge (APBCTRL).
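As a rough sketch, the per-interface AHB plug&play records can be modeled as below. This is an
illustration only; the authoritative field layout is defined in the GRLIB User's Manual, and the bit
positions given in the comments should be verified there.

    #include <stdint.h>

    /*
     * Sketch of one AHB plug&play entry (one record per AHB master or
     * slave interface): an identification word, three user-defined
     * words and four bank address registers (BARs).
     */
    struct ahb_pnp_entry {
        uint32_t id;       /* [31:24] vendor, [23:12] device,
                              [9:5] version, [4:0] irq */
        uint32_t user[3];  /* user-defined registers (bridges publish
                              the remote plug&play base address here) */
        uint32_t bar[4];   /* [31:20] address, [15:4] mask, [3:0] type */
    };

    /* Decoding helpers for the identification word */
    #define PNP_VENDOR(id)  (((id) >> 24) & 0xffu)
    #define PNP_DEVICE(id)  (((id) >> 12) & 0xfffu)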
In multi-bus systems, each bus will have its own AMBA plug&play configuration area, and software
must be able to access all plug&play areas in order to discover all peripherals in the system. The
same applies to the GRMON debug monitor: to discover all peripherals, the debug communication
link's master interface must be connected to a bus from which it can access all plug&play areas (as
well as the memory where peripheral registers are mapped).
The plug&play scanning routines discover the presence of multiple AHB buses when they encounter
the slave interface of a core such as the Level-2 cache or an AHB/AHB bridge (AHB2AHB,
AHBBRIDGE). Upon discovery of a bridge, the routine will typically look in the user-defined
registers of the bridge's plug&play information to find the base address of the AHB I/O and
plug&play area of the second bus. Exactly how the base address of the plug&play information is
communicated to the scanning routine is specific to each core. The Level-2 cache and the AHB/AHB
bridges store this address in user-defined register 1 of the core's AHB slave interface plug&play
information. A value of zero in this register signals to software that plug&play scanning should not
be performed for the second bus behind the bridge.
When software discovers a bridge to a new bus, scanning should commence using the new plug&play
area address (depth-first scanning), and once the new plug&play area has been handled, scanning
should continue on the current bus.
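A minimal depth-first scanning routine could look like the sketch below, reusing the
ahb_pnp_entry layout from the earlier sketch. The vendor/device identifiers, the number of slave
slots scanned, and the use of the first user-defined word for the remote base address are
illustrative assumptions; the authoritative encodings are in grip.pdf and GRLIB's devices.vhd.

    #include <stdint.h>
    #include <stdio.h>

    /* One AHB slave plug&play entry, as in the previous sketch */
    struct ahb_pnp_entry {
        uint32_t id;
        uint32_t user[3];
        uint32_t bar[4];
    };

    #define PNP_VENDOR(id)  (((id) >> 24) & 0xffu)
    #define PNP_DEVICE(id)  (((id) >> 12) & 0xfffu)

    /* Assumed identifiers for cores that bridge to a second bus;
     * look up the real values in devices.vhd / grip.pdf. */
    #define VENDOR_GAISLER   0x01
    #define GAISLER_AHB2AHB  0x020
    #define GAISLER_L2CACHE  0x04b

    /*
     * Depth-first plug&play scan. 'slaves' points at the slave records
     * of one bus. When a bridge (or L2 cache) slave interface is found,
     * its user-defined register 1 holds the plug&play base address of
     * the bus behind it; zero means that bus should not be scanned.
     */
    static void scan_bus(const struct ahb_pnp_entry *slaves, int depth)
    {
        for (int i = 0; i < 16; i++) {    /* 16 slave slots in this sketch */
            uint32_t id = slaves[i].id;
            if (id == 0)
                continue;                 /* unused slot */
            printf("%*svendor %02x device %03x\n", depth * 2, "",
                   (unsigned)PNP_VENDOR(id), (unsigned)PNP_DEVICE(id));
            if (PNP_VENDOR(id) == VENDOR_GAISLER &&
                (PNP_DEVICE(id) == GAISLER_AHB2AHB ||
                 PNP_DEVICE(id) == GAISLER_L2CACHE)) {
                uint32_t remote = slaves[i].user[0]; /* user-defined reg 1 */
                if (remote != 0)                     /* 0: do not scan */
                    scan_bus((const struct ahb_pnp_entry *)(uintptr_t)remote,
                             depth + 1);             /* descend first */
            }
        }
    }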
Note that for plug&play scanning to work, all plug&play areas must be accessible from the AHB
master that performs the scan. This means that any bridge between AHB buses must have a window
that allows the plug&play area on the other side of the bridge to be accessed. System software and
debug tools by default start scanning for a plug&play area at the top of the AMBA memory space. It
is important that the plug&play area located at this address has pointers so that all other
plug&play areas in the system can be discovered. For instance, the default plug&play area address
should not be occupied by the plug&play area of a bus that is only connected to the rest of the
system via the AHB master interface side of a Level-2 cache or uni-directional bridge. This is
because the plug&play information at the AHB master interface does not contain the base address of
the plug&play area of the bus on the AHB slave interface side of the bridge. As a result,
plug&play scanning routines will only find one bus in the system.
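As a usage illustration, and assuming the default AHBCTRL configuration, the configuration area
sits at the top of the address space with master records first and slave records at offset 0x800,
so the scanning sketch above would be started along these lines:

    /* Default plug&play area, assuming AHBCTRL's default configuration:
     * master records at 0xFFFFF000, slave records at offset 0x800. */
    #define AHB_PNP_BASE    0xfffff000u
    #define AHB_PNP_SLAVES  (AHB_PNP_BASE + 0x800u)

    scan_bus((const struct ahb_pnp_entry *)AHB_PNP_SLAVES, 0);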