
Messages from 135325

Article: 135325
Subject: Re: Use of divided clocks inside modules
From: Gael Paul <gael.paul@gmail.com>
Date: Fri, 26 Sep 2008 06:06:46 -0700 (PDT)
Links: << >>  << T >>  << A >>
Symon,

My two cents...

Multi-cycle path constraints (as well as false path constraints) are
inherently fine. However, they should be used only when necessary, as
they dramatically increase the complexity of timing analysis, which
can lead to increased runtime and memory usage, and even QoR
degradation in extreme cases.

Indeed, timing-driven tools (synthesis, placement, routing) spend a
lot of time running timing analysis to select the next critical path
to work on, and to evaluate the impact of a given optimization (logic
optimization for synthesis, placement update, routing update). The
more exceptions they have to handle (and the more complex those
exceptions are, -through points in particular), the slower this
analysis becomes.

My advice is thus to add only the timing exceptions that apply to
critical paths (negative slack), as you really don't want your tool to
needlessly work on these paths (often at the expense of other -truly-
critical paths). At synthesis, I also suggest adding exceptions for
near-critical paths (e.g. slack < worst slack + ~0.5ns). Indeed, the
best synthesis tools (like Synplify Pro) actually work a little beyond
the most critical paths, in an attempt to compensate for estimation
errors. Getting these off the road is thus a good idea to help P&R
meet timing.

Cheers,

 - gael





Article: 135326
Subject: Re: Use of divided clocks inside modules
From: Andy <jonesandy@comcast.net>
Date: Fri, 26 Sep 2008 07:39:10 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Sep 26, 6:34 am, "Symon" <symon_bre...@hotmail.com> wrote:
> Personally, I use them all the time and have few problems; certainly fewer
> problems than I would have if I tried to do the same thing with multiple
> clocks. In a situation where a block of logic can run at (say) half the
> clock rate, I give it an enable and set the constraints accordingly using
> the
> NET "clock_enable_signal*" TNM=TS1;
> thing in the Xilinx tools. (BTW, The asterisk is useful if the enable gets
> replicated by the synthesis tools) This lets the P&R tools concentrate
> effort where it's needed.
>
> Of course, avoiding multi-cycle constraints by using a divided clock would
> be bad.
>
> Can you give an example where you've run into problems?
>
> Thanks, Symon.

Using the clock_enable_signal as the defining constraint only works if
all the INPUTS to those registers come from registers that were also
(and always) disabled by the same signal. If the source or destination
registers are ever enabled at other times when clock_enable_signal is
false, look out.

Andy

Article: 135327
Subject: Re: Use of divided clocks inside modules
From: "Symon" <symon_brewer@hotmail.com>
Date: Fri, 26 Sep 2008 15:52:09 +0100
Links: << >>  << T >>  << A >>
Andy wrote:
>
> Using the clock_enable_signal as the defining constraint only works if
> all the INPUTS to those registers come from registers that were also
> (and always) disabled by the same signal. If the source or destination
> registers are ever enabled at other times when clock_enable_signal is
> false, look out.
>
> Andy

Hi Andy,

Absolutely. However, the tools (are meant to) take care of all that.

If you see a problem with constraints like this one below, I really would 
like to know about it.

NET "clock_enable_signal*" TNM=FFS "enabled_ffs";
TIMESPEC TS1 = FROM : enabled_ffs : TO : enabled_ffs : 20ns;

Thanks, Symon.

p.s. Here's a link to the TNM documentation. 



Article: 135328
Subject: Re: Use of divided clocks inside modules
From: "Symon" <symon_brewer@hotmail.com>
Date: Fri, 26 Sep 2008 16:08:02 +0100
Links: << >>  << T >>  << A >>
Gael Paul wrote:
> Symon,
>
> My two cents...
>
> Multi-cycle paths constraints (as well as false paths constraints) are
> inherently fine. However, they should be used only when necessary as
> they dramatically increase the complexity of timing analysis, which
> can lead to increased runtime and memory usage, and even QoR
> degradation in extreme cases.
>
> Indeed, timing-driven tools (synthesis, placement, routing) spend a
> lot of time running timing analysis to select the next critical path
> to work on, and to evaluate the impact of a given optimization (logic
> optimization for synthesis, placement update, routing update). The
> more exceptions they have to handle (and the more complex they are, in
> particular -through points) the harder it is to handle them.
>
> My advice is thus to only add the timing exceptions that apply to
> critical paths (negative slack) as you really don't want your tool to
> needlessly work on these paths (often at the expense of other -truly-
> critical paths). At synthesis, I also suggest adding exceptions for
> near-critical paths (e.g. slack < worst slack + ~0.5ns). Indeed, the
> best synthesis tools (like Synplify Pro) actually work a little beyond
> the most critical paths, in an attempt to compensate for estimation
> errors. Getting these off the road is thus a good idea to help P&R
> meeting timing.
>
> Cheers,
>
> - gael
>
>
Hi Gael,

I may or may not agree with you! :-) I'd be grateful to know what you think 
about this...

Say I have a block which is clocked at 200MHz, but only actually needs to 
run at 100MHz, and I use a clock enable to control the block. Now let's look 
at some scenarios.

1) The block can easily run at 200MHz. Thus there seems no point in using a 
multi-cycle path.

2) The block can only just run at 200MHz. In this case I would use the 
multi-cycle constraint for the whole block, as experience has shown me that 
this dramatically improves P&R times. My suspicion is that the tool has to 
analyse every path anyway; adding extra exceptions isn't a big deal for it, 
and the slacker requirements reduce the time needed for P&R.

3) Most of the block will run at 200MHz, but there are a few critical 
paths which will not. However, these critical paths will easily run at 
100MHz. In this case I would again use the multi-cycle enable, and I'd put 
it on the whole block, as it's easier to do that than to specify just the 
critical paths.

4) Large parts of the block barely run at 100MHz. Clearly the multi-cycle 
constraint is necessary.

OK, I think we agree on (1) and (4). What would you do for (2) and (3)?

Thanks, Symon.





Article: 135329
Subject: Re: Having problems with using flash in EDK
From: "MM" <mbmsv@yahoo.com>
Date: Fri, 26 Sep 2008 11:10:01 -0400
Links: << >>  << T >>  << A >>
Run the Flash programmer under debugger and see what the problem is. 
Alternatively you can see what's going on with the ChipScope.

/Mikhail


"Goli" <togoli@gmail.com> wrote in message 
news:436789c8-d38d-49a3-b09f-79acb2fa684c@a18g2000pra.googlegroups.com...
> Hi,
>
> I am using EDK10.1, and I have Atmel 32MB flash on my board,
> (AT49BV322D). But whenever I try to program the flash through the
> Device configuration, Program Flash tab it says, Unable to query part
> layout using CFI.
>
> I have checked all the connections and even probed the signals, they
> all seem to be fine.
>
> Is there anything special that I have to do?
>
> --
> Goli 



Article: 135330
Subject: Re: Use of divided clocks inside modules
From: Gael Paul <gael.paul@gmail.com>
Date: Fri, 26 Sep 2008 08:31:55 -0700 (PDT)
Links: << >>  << T >>  << A >>
Symon,

It would seem we fully agree:
 - case 1) I would indeed not add the multicycle constraint.
 - case 2) I would add it on the whole block; that's the 'near-
critical' scenario.
 - case 3) and 4) I would also add the constraint on the whole
block.
Here's why: in your example, there is in fact one multicycle
constraint: "define_multicycle_path -from clock_enable_reg 2" (SDC
syntax). That is the simplest form of a timing exception, and
thus the easiest for tools to handle. Partitioning this constraint
for a subset of paths starting from the clock enable would actually
create a more complicated constraint:  "define_multicycle_path -from
clock_enable_reg -to {A_reg B_reg C_reg D_reg ... } 2". In fact, this
creates N constraints, where N is the number of end points. Clearly,
that represents more work for tools to track and analyze (more
constraints, more complex constraints).

Now, let's expand your example to 4 independent blocks like this one,
each with a different clock_enable. Here, I would choose to add the
full multi-cycle constraint only for the blocks that don't meet their
clock constraint. This better illustrates how one can selectively add
the 'useful' timing exceptions.

Cheers,

 - gael

Article: 135331
Subject: Clocking Sync Burst SRAM
From: Tommy Thorn <tommy.thorn@gmail.com>
Date: Fri, 26 Sep 2008 11:34:14 -0700 (PDT)
Links: << >>  << T >>  << A >>
I wrote a little controller + tester app for the SSRAM on Terasic's
DE2-70 which is rated for 200 MHz. I have gotten it working @ 170 MHz,
but I'm a little uneasy about the SSRAM clock. Being a non-EE I
suspect I'm missing something fundamental here.

First, all outputs are fully registered (and constrained to guarantee
they stay registered). The main logic is clocked by a PLL. A second
but identically configured output on this PLL drives the SSRAM. The
SSRAM datasheet lists the setup and hold times for the inputs as 1.4
ns / 0.4 ns respectively. What is the correct way to achieve this?
Phase-shift the SSRAM clock? Specify it as a timing constraint? Do
timing constraints influence the output buffer or are they purely for
checking?

Keeping the clock in phase with the main clock led to errors showing
up (once beyond 100 MHz), but shifting it a few degrees made it
perfectly stable again. However, what is the appropriate engineering
approach to this source synchronous problem?

Any help would be much appreciated.

Thanks,
Tommy


FWIW, these are the constraints I'm currently using:

# timing constraints for SSRAM
set_instance_assignment -name FAST_OUTPUT_REGISTER ON -to oSRAM*
set_instance_assignment -name FAST_OUTPUT_REGISTER ON -to SRAM_*
set_instance_assignment -name FAST_OUTPUT_ENABLE_REGISTER ON -to SRAM_*
set_instance_assignment -name TCO_REQUIREMENT "3 ns" -to oSRAM*
set_instance_assignment -name TCO_REQUIREMENT "3 ns" -to SRAM*
set_instance_assignment -name TSU_REQUIREMENT "2.2 ns" -to SRAM*

# other default timings
set_global_assignment -name TSU_REQUIREMENT "5 ns"
set_global_assignment -name TCO_REQUIREMENT "10 ns"

Article: 135332
Subject: maximum clock rating
From: uraniumore238@gmail.com
Date: Fri, 26 Sep 2008 11:49:45 -0700 (PDT)
Links: << >>  << T >>  << A >>
I am using a XILINX  Spartan II  XC2S50 board and would like to know
what the maximum clock rating for the chip is and if I can clock it
with an in-coming clock osc. at 150 MHz...

Article: 135333
Subject: Re: maximum clock rating
From: Tommy Thorn <tommy.thorn@gmail.com>
Date: Fri, 26 Sep 2008 11:56:32 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Sep 26, 11:49 am, uraniumore...@gmail.com wrote:
> I am using a XILINX Spartan II XC2S50 board and would like to know
> what the maximum clock rating for the chip is and if I can clock it
> with an in-coming clock osc. at 150 MHz...

Absolutely!






While this answer is technically correct, it's not very useful;
fully answering this question requires a tutorial in digital logic and
FPGA design. Please google for an introductory text on the subject.

Tommy

Article: 135334
Subject: Re: Clocking Sync Burst SRAM
From: KJ <kkjennings@sbcglobal.net>
Date: Fri, 26 Sep 2008 12:14:14 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Sep 26, 2:34 pm, Tommy Thorn <tommy.th...@gmail.com> wrote:
> I wrote a little controller + tester app for the SSRAM on Terasic's
> DE2-70 which is rated for 200 MHz. I have gotten it working @ 170 MHz,
> but I'm a little uneasy about the SSRAM clock. Being a non-EE I
> suspect I'm missing something fundamental here.
>
> First, all output are fully registered (and constrained to guarantee
> they stay registered). The main logic is clocked by a PLL. A second
> but identically configured output on this PLL drives the SSRAM.

Keep in mind though that there can easily be different delays from
these two PLL outputs to the different destinations.

> The
> SSRAM datasheet lists the setup and hold times for the inputs as 1.4
> ns / 0.4 ns respectively. What is the correct way to achieve this?

You'll need to know...
- What is an achievable skew between the clock at the internal flops
and the clock as it is leaving your controller.  In some sense it
doesn't matter too much what that actual skew is, but you need to know
what it is so that you can then add a timing constraint so that this
delay is always met, or flagged as a timing error for you.  For the
sake of an example, let's just say that there is a skew of 1 ns
between the internal clock and the clock at the output of the FPGA.
Keep in mind that this skew will have both a minimum and a maximum so
the skew is really a range between those two extremes.

- Net lengths and capacitive loading of the signals on the PCBA that
go between the two devices.  What's important here is really
differences between the various SRAM inputs and the clock.  From that
you calculate an additional delay.  Practically speaking, you most
likely have roughly equal net lengths and loading on all of the
signals and this is not going to be a concern, but you should at least
be aware of this as well.  Different parts may have different
capacitive loading so if you want to get nit picky this delay will
also be a range with a min and a max but that range will typically be
much smaller than the uncertainty with the FPGA.

From the FPGA clock skew min/max add on the additional delay for
length/loading differences and now you have a known window of clock
uncertainty.  Now get a piece of paper and sketch out some waveforms
showing the min/max switching times of the control signals (i.e.
address, oe, write, data_in, data_out) as well as the setup/hold time
of both the SRAM and the FPGA (for the data coming back in).  Somebody
was advertising a free timing waveform tool out here a few months back
(I don't remember the name), that may help but it's not that difficult
to paper sketch it either.

From all of that you should come out with a sketch that shows where
things need to be valid in order for the system to read and write
properly.  Adjust the nominal phase of the clock leaving the device
(or equivalently the FPGA internal clock) so that the clock occurs
(both min and max time) at a point where everything is stable.  Keep
in mind that as you shove the clock one way to improve setup time at
the SRAM, you're most likely stealing that from the setup time at the
FPGA when it is reading data back.

There can also be other concerns: if the nets are long you'll get
ringing, which distorts the waveforms and basically means that you'll
need to wait longer for things to stabilize, which cuts into the
allowable timing.  At 100 MHz, just 1 ns is 10% of the clock cycle
budget.  Whether or not that's an issue, you'll need to determine
with a scope.

> Phase-shift the SSRAM clock?
Yes

> Specify it as a timing constraint?
Yes

> Do timing constraints influence the output buffer or are they purely for
> checking?

For the most part it's just checking, although it can affect place and
route as well.  I haven't seen a case where it affected the output
buffer itself (i.e. kicked up or down the drive strength) in order to
meet a constraint.  This is most likely because drive strength
considerations have a much larger impact than just timing.  It does go
the other way though, as you fiddle with drive strength the software
should take this into account when it does the timing analysis.

Since it appears from your constraint that you're using Quartus, you
might want to put in numbers that are representative of the correct
capacitive loading as well as checking that the I/O drive strengths
are appropriate and not just the defaults (unless  you've already done
this).

KJ
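
KJ's budget arithmetic above can be sketched in a few lines. This is a
hypothetical check: the function name and all the numbers except the 1.4 ns
setup figure from the thread (tco, skew, board delay) are placeholders, not
values from any datasheet.

```python
# Hedged sketch of a worst-case setup-margin check for the SSRAM write path.
# All delay figures below are illustrative placeholders.

def setup_margin_ns(period, tco_max, skew_max, board_delay_max, tsu):
    """Worst-case setup margin at the SRAM: positive means timing is met."""
    return period - tco_max - skew_max - board_delay_max - tsu

period = 1e3 / 170.0  # ~5.88 ns clock period at 170 MHz
margin = setup_margin_ns(period, tco_max=3.0, skew_max=1.0,
                         board_delay_max=0.3, tsu=1.4)
print(f"setup margin: {margin:.2f} ns")
```

As KJ notes, shifting the clock to improve this margin steals from the
read-path margin at the FPGA, so the same arithmetic should be run in both
directions.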

Article: 135335
Subject: Re: Clocking Sync Burst SRAM
From: jhallen@TheWorld.com (Joseph H Allen)
Date: Fri, 26 Sep 2008 19:37:16 +0000 (UTC)
Links: << >>  << T >>  << A >>
There are a number of ways to do this.  Here is one way:

What you usually want is for the clock at the pin of the SRAM chip to rise
at the same time as the FPGA internal global clock driving the output pads.

A way to do this is to route the clock from the second PLL to two output
pins (right next to each other).  One pin goes to the SSRAM.  The other pin
goes to the feedback input of the PLL.  The trace length for this feedback
line should be the same as the one which goes to the RAM.  You have to use a
dedicated PLL feedback input pin for this to work.  This way is nice because
you can usually leave the PLL phase shift setting at 0.

You have to use an "enhanced" PLL with external feedback pin and set it to
use external feedback pin mode.

In article <11ffca10-bec7-4cb7-bed4-458a3a8743dd@v13g2000pro.googlegroups.com>,
Tommy Thorn  <tommy.thorn@gmail.com> wrote:
>I wrote a little controller + tester app for the SSRAM on Terasic's
>DE2-70 which is rated for 200 MHz. I have gotten it working @ 170 MHz,
>but I'm a little uneasy about the SSRAM clock. Being a non-EE I
>suspect I'm missing something fundamental here.
>
>First, all output are fully registered (and constrained to guarantee
>they stay registered). The main logic is clocked by a PLL. A second
>but identically configured output on this PLL drives the SSRAM. The
>SSRAM datasheet lists the setup and hold times for the inputs as 1.4
>ns / 0.4 ns respectively. What is the correct way to achieve this?
>Phase-shift the SSRAM clock? Specify it as a timing constraint? Do
>timing constraints influence the output buffer or are they purely for
>checking?
>
>Keeping the clock in phase with the main clock led to errors showing
>up (once beyond 100 MHz), but shifting it a few degrees made it
>perfectly stable again. However, what is the appropriate engineering
>approach to this source synchronous problem?
>
>Any help would be much appreciated.
>
>Thanks,
>Tommy
>
>
>FWIW, these are the constraints I'm currently using:
>
># timing constraints for SSRAM
>set_instance_assignment -name FAST_OUTPUT_REGISTER ON -to oSRAM*
>set_instance_assignment -name FAST_OUTPUT_REGISTER ON -to SRAM_*
>set_instance_assignment -name FAST_OUTPUT_ENABLE_REGISTER ON -to
>SRAM_*
>set_instance_assignment -name TCO_REQUIREMENT "3 ns" -to oSRAM*
>set_instance_assignment -name TCO_REQUIREMENT "3 ns" -to SRAM*
>set_instance_assignment -name TSU_REQUIREMENT "2.2 ns" -to SRAM*
>
># other default timings
>set_global_assignment -name TSU_REQUIREMENT "5 ns"
>set_global_assignment -name TCO_REQUIREMENT "10 ns"


-- 
/*  jhallen@world.std.com AB1GO */                        /* Joseph H. Allen */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

Article: 135336
Subject: Does XST support global signals?
From: EM <employee3072@gmail.com>
Date: Fri, 26 Sep 2008 13:15:17 -0700 (PDT)
Links: << >>  << T >>  << A >>
Hello folks,


I searched this newsgroup, Xilinx's website, and the XST user's guide,
but I couldn't find what I was looking for.

Does anyone know if XST supports global nets?

In my last project I used Synplicity, which allowed me to use global
nets / buses.  In this new project I'm using XST.  XST itself doesn't
complain, but I'm getting goofy errors with NGDBUILD.  I'm just
wondering if anyone else has tried this.


Thanks in advance,
EM


P.S.  I'm using VHDL, if that matters.

Article: 135337
Subject: Re: Does XST support global signals?
From: Brian Davis <brimdavis@aol.com>
Date: Fri, 26 Sep 2008 18:35:18 -0700 (PDT)
Links: << >>  << T >>  << A >>
EM wrote:
>
> Does anyone know if XST supports global nets?
>
> In my last project I used Synplicity, which allowed me to use global
> nets / buses.  In this new project I'm using XST.  XST itself doesn't
> complain, but I'm getting goofy errors with NGDBUILD.  I'm just
> wondering if any else has tried this.
>

XST has supported signals in packages for a few years now.
  http://www.xilinx.com/support/answers/13895.htm

 I've used a record of signals in a package as global 'probes'
in both simulation and with XST 9.x, assigned in one spot
and read in another.

 Without a code snippet to illustrate your error, nor the NGDBUILD
error messages, it's tough to make any meaningful suggestions.

 Are you doing anything funky with the global signal, like trying
to write to it from two places?

 I'd also suggest tinkering with XST/PAR options to see whether
that affects the NGDBUILD error; e.g., turn off global optimization
and turn on the keep_hierarchy flags in XST and PAR option settings.

Brian

Article: 135338
Subject: Open source IP core development with configuration GUI
From: nezhate <mazouz.nezhate@gmail.com>
Date: Sat, 27 Sep 2008 00:22:18 -0700 (PDT)
Links: << >>  << T >>  << A >>
Hi there,
I'm interested in developing my own IP core. I would like it to look like
Xilinx's IP Core Generator. Is that possible? Or do I have to develop the
GUI myself, without using any ready-made API or the like?
I'm aiming to release my IP cores as open source. Any advice?

Best regards
Nezhate

Article: 135339
Subject: Re: Use of divided clocks inside modules
From: "Symon" <symon_brewer@hotmail.com>
Date: Sat, 27 Sep 2008 12:10:55 +0100
Links: << >>  << T >>  << A >>
Gael Paul wrote:
> Symon,
>
> It would seems we fully agree:

Hi Gael,
Thanks for your reply, that's the way I think also. I guess we're no good at 
this usenet thing! Two people actually agreeing? What is the internet coming 
to?
Best, Syms.

p.s. Another little trick I discovered is to force the synthesis tool to 
connect the clock enable directly to the purpose-built CE pins on the slice. 
Synplify has something like this:-
attribute direct_enable of clken :signal is true;
The CE pin has less setup time than going through the F or G logic LUTs, and 
so helps the timing. From experience, the synthesis tool doesn't do this by 
default; indeed, it's possible that some FFs have more than one enable, so 
the tool needs to be told which is the multi-cycle enable to be used on the 
CE pin.



Article: 135340
Subject: Re: Open source IP core development with configuration GUI
From: Brian Drummond <brian_drummond@btconnect.com>
Date: Sat, 27 Sep 2008 12:57:31 +0100
Links: << >>  << T >>  << A >>
On Sat, 27 Sep 2008 00:22:18 -0700 (PDT), nezhate
<mazouz.nezhate@gmail.com> wrote:

>Hi there,
>I'm interested in developing my own IP core. I would like it to look like
>Xilinx's IP Core Generator. Is that possible? Or do I have to develop the
>GUI myself, without using any ready-made API or the like?
>I'm aiming to release my IP cores as open source. Any advice?

My suggestion would be to forget the GUI altogether; stick to plain
text, scripts to help build it, and VHDL.

Keep the top level of the design as clean and simple as possible, so that
others can understand how to configure it and use it.

Customise its external view and major behaviour (e.g. bus widths)
through generics; keep any further controls in one package to be
modified as necessary by the user. Better still, supply default values
for the generics from the package too.

Include a simple example design which instantiates it as a component,
and another which instantiates the entity directly. If it is meant to be
synthesised separately and used as a black box, the examples should show
that too.

If you want a GUI, it can come later; it's much less important than
clarity, usability and understanding. You can make a simple GUI by just
writing a script which uses GDialog or Zenity. It doesn't do everything
you can do in a GUI, but a wizard-like series of dialogs with tick
boxes, choice buttons, and text/number entry etc is easy.
See e.g. http://www.linux.com/feature/114156
This will allow easy user selection of parameters; the script can then
write the VHDL package for you. 

Getting a GUI to run on all available platforms may be harder work;
another reason why the IP should be usable without it.

- Brian
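
Brian's last suggestion, a script that writes the VHDL configuration package
from user-selected parameters, could be sketched like this. The names and
generics are made up; a real script would take the values from the Zenity
dialogs rather than hard-coding them.

```python
# Hypothetical sketch: render a user-configuration VHDL package from a
# dict of {constant_name: (vhdl_type, value)} parameters.

def write_config_package(name, params):
    """Return the text of a VHDL package of constants."""
    lines = [f"package {name} is"]
    for pname, (ptype, value) in params.items():
        lines.append(f"  constant {pname} : {ptype} := {value};")
    lines.append(f"end package {name};")
    return "\n".join(lines)

pkg = write_config_package("core_config_pkg",
                           {"DATA_WIDTH": ("natural", 32),
                            "FIFO_DEPTH": ("natural", 512)})
print(pkg)
```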

Article: 135341
Subject: Re: Is it possible to get an RTL netlist from Xilinx tools?
From: thutt <thutt151@comcast.net>
Date: 27 Sep 2008 09:21:49 -0700
Links: << >>  << T >>  << A >>
David R Brooks <davebXXX@iinet.net.au> writes:

> thutt wrote:
> > Kevin Neilson <kevin_neilson@removethiscomcast.net> writes:
> > 
> >> thutt wrote:
> >>> Hello everyone,
> >>> I'm tired of using the Xilinx ISE to look at RTL schematics, mainly

<snip>

> > If there were a lookup table of initialization values to the standard
> > 'and / or / xor'-type logic that each represents, then it would be
> > quite possible to write a script to make an external RTL viewer based
> > on the technology netlist output by the Xilinx tools.
> > 
> > Anyone game for a little sleuthing to determine all the LUT init
> > codes?
> >
> 
> It's a bit harder than that. Assume a LUT has 4 inputs (it varies
> with different FPGA families). The synthesiser (eg XST) will look
> for a piece of combinatorial logic with 4 inputs & 1 output. It then
> simulates that logic, evaluating its output for all 16 possible
> input combinations. The initialisation string is simply the map of
> those 16 outputs, written out in hex.  So there really isn't an
> underlying logic representation: the designer might have specified
> any Boolean function whatever (adder, parity tree, etc): the
> synthesiser doesn't care.

I'm more confused now.  Somehow the LUT must correspond to some
concrete logic, and each LUT must be reconfigurable, right?

How does the netlist indicate what logic is implemented in the LUT?
If it's possible to determine that information, I'll be happy.

thutt
http://www.harp-project.com/
-- 
Hoegaarden!

Article: 135342
Subject: Re: Is it possible to get an RTL netlist from Xilinx tools?
From: thutt <thutt151@comcast.net>
Date: 27 Sep 2008 09:26:55 -0700
Links: << >>  << T >>  << A >>
Kevin Neilson <kevin_neilson@removethiscomcast.net> writes:

> thutt wrote:
> > Kevin Neilson <kevin_neilson@removethiscomcast.net> writes:
> >
> >> thutt wrote:
> >>> Hello everyone,
> >>> I'm tired of using the Xilinx ISE to look at RTL schematics, mainly
> >>> because it's so slow and cumbersome to use.  What I'd like to do is
> >>> output a netlist and use another script to process that netlist into

<snip>

> > 'and / or / xor'-type logic that each represents, then it would be
> > quite possible to write a script to make an external RTL viewer based
> > on the technology netlist output by the Xilinx tools.
> > Anyone game for a little sleuthing to determine all the LUT init
> > codes?
> > thutt

> The problem with turning a technology schematic into an RTL
> schematic is akin to turning object code into C.

I'm not interested in 'turning the object code back into C', I'm
interested in looking at the disassembly of the object code.

In other words, I just want to look at the low-level schematics.

> Sure, you can determine that a LUT with a INIT code of FFFE is an
> inverter.

Can you point me to a reference that shows how to do this?

> But
> it's hard determine that a collection of 80 LUTs is an adder, which
> you want to represent on an RTL schematic with a "+" sign, not as a
> bunch of XORs.  -Kevin

I actually don't care about determining that a bunch of LUTs actually
implement something else.  I just want to quickly peek at the RTL from
time to time to make sure that things are being generated as I expect.

I also want to be able to *easily* grab dumps of the RTL schematic for
inclusion in my website (http://www.harp-project.com/) for descriptive
purposes.  The only way I know how to get schematics 'to disk' using
ISE is with a screen capture, and that is totally inferior to being
able to dump a high-quality image.

thutt
-- 
Hoegaarden!

Article: 135343
Subject: Re: Is it possible to get an RTL netlist from Xilinx tools?
From: thutt <thutt151@comcast.net>
Date: 27 Sep 2008 09:33:32 -0700
Links: << >>  << T >>  << A >>
Kevin Neilson <kevin_neilson@removethiscomcast.net> writes:

> Brian Drummond wrote:
> > On 20 Sep 2008 09:48:58 -0700, thutt <thutt151@comcast.net> wrote:
> >
> >> Hello everyone,
> >>
> >> I'm tired of using the Xilinx ISE to look at RTL schematics, mainly
> >> because it's so slow and cumbersome to use.  What I'd like to do is
> >> output a netlist and use another script to process that netlist into
> >> input suitable for VCG (http://rw4.cs.uni-sb.de/~sander/html/gsvcg1.html).
> >>
> >> I have figured out how to get the Xilinx tools to output a netlist,
> >> but it appears they output a 'technology' version and not an 'RTL'
> >> version of the netlist.
> >
> >> Is there any way to get the tools to output a netlist that is in the
> >> more useful (to me) 'RTL' format?
> > Have you looked at the post-synthesis simulation model (in VHDL)?
> > - Brian
> The post-synthesis simulation model, if viewed in a different tool,
> would be about identical to a technology schematic, because all large
> structures have been resolved into primitives.
> -Kevin

Can you give me an example of the work-flow that will accomplish this?
What tools?  What output files from the Xilinx toolchain will be fed
into those 'different tools'?

thutt
http://www.harp-project.com/
-- 
Hoegaarden!

Article: 135344
Subject: Re: 50 Ohm Analog Output of FPGA
From: rickman <gnuarm@gmail.com>
Date: Sat, 27 Sep 2008 10:32:50 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Sep 23, 10:50 am, hei...@iname.com wrote:
> > The low pass filter will not give you a sine wave.  It will
> > approximate one, but with 20 MHz components on a 10 MHz signal, you
> > will not be able to fully filter it out.
>
> Is that because the FPGA will not generate a perfect square wave?
>
> I purchased a DAC but it seemed like a waste to send 8 bits at a time
> when there are really only two values. Anyway, thanks for the info.

Why can't an FPGA output a "perfect" sine wave?

It is because of the limitations of filters.  If you output a "perfect"
square wave at 10 MHz, you will have harmonics at 30 MHz, 50 MHz, etc.
(ignore my 20 MHz comment above).  To get a good sine wave, you need
to reduce these harmonic levels by more than 20 dB, and more like 40
dB.  Getting 40 dB of attenuation over a 3:1 frequency range requires
one heck of a good filter.  That is why delta-sigma converters sample
at much higher rates and interpolate or decimate.  By sampling at a
frequency much higher than the signal frequency, filters can be
designed to give a sharper cutoff.
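The arithmetic behind those numbers can be checked quickly: an ideal 50%
square wave has only odd harmonics, and the n-th harmonic sits at 1/n of
the fundamental's amplitude, so the filter has to supply the rest of the
attenuation. A small Python sketch (assuming an ideal square wave):

```python
import math

# Fourier series of an ideal 50% square wave: only odd harmonics, with
# the n-th harmonic at amplitude (4/pi)/n, i.e. 1/n relative to the
# fundamental.  Print the relative level of the first few harmonics.
for n in (3, 5, 7):
    level_db = 20 * math.log10(1.0 / n)
    print(f"harmonic {n} ({10 * n} MHz on a 10 MHz clock): {level_db:.1f} dBc")
```

So the third harmonic starts only about 9.5 dB down; to push it 40 dB
below the fundamental, the filter alone must provide roughly 30 dB of
attenuation between 10 MHz and 30 MHz.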

Rick

Article: 135345
Subject: Re: Is it possible to get an RTL netlist from Xilinx tools?
From: doug <xx@xx.com>
Date: Sat, 27 Sep 2008 10:03:57 -0800
Links: << >>  << T >>  << A >>


thutt wrote:
> David R Brooks <davebXXX@iinet.net.au> writes:
> 
> 
>>thutt wrote:
>>
>>>Kevin Neilson <kevin_neilson@removethiscomcast.net> writes:
>>>
>>>
>>>>thutt wrote:
>>>>
>>>>>Hello everyone,
>>>>>I'm tired of using the Xilinx ISE to look at RTL schematics, mainly
> 
> 
> <snip>
> 
>>>If there were a lookup table of initialization values to the standard
>>>'and / or / xor'-type logic that each represents, then it would be
>>>quite possible to write a script to make an external RTL viewer based
>>>on the technology netlist output by the Xilinx tools.
>>>
>>>Anyone game for a little sleuthing to determine all the LUT init
>>>codes?
>>>
>>
>>It's a bit harder than that. Assume a LUT has 4 inputs (it varies
>>with different FPGA families). The synthesiser (eg XST) will look
>>for a piece of combinatorial logic with 4 inputs & 1 output. It then
>>simulates that logic, evaluating its output for all 16 possible
>>input combinations. The initialisation string is simply the map of
>>those 16 outputs, written out in hex.  So there really isn't an
>>underlying logic representation: the designer might have specified
>>any Boolean function whatever (adder, parity tree, etc): the
>>synthesiser doesn't care.
> 
> 
> I'm more confused now.  Somehow the LUT must correspond to some
> concrete logic, and each LUT must be reconfigurable, right?
> 
> How does the netlist indicate what logic is implemented in the LUT?
> If it's possible to determine that information, I'll be happy.
> 
A LUT is a 16x1 memory. It implements the truth table of the logic.
That is what it does.  The memory contents tell you the truth table
of the logic.  The details of the logic are unimportant, the output
is important.  That is what is implemented.  To make a schematic of
this, just draw a 16x1 memory and list the memory contents.
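doug's description can be sketched directly: a 4-input LUT is nothing but
a 16x1 memory whose read address is formed from the four inputs, and the
INIT word from the netlist is the memory contents. A minimal Python
sketch, using a hypothetical INIT value for illustration:

```python
# A 4-input LUT modeled as a 16x1 memory: bit i of the INIT word is the
# output for input address i.
INIT = 0x8000  # hypothetical INIT: only bit 15 set, i.e. a 4-input AND

def lut_out(i3, i2, i1, i0):
    # The four LUT inputs form the read address into the truth table.
    addr = (i3 << 3) | (i2 << 2) | (i1 << 1) | i0
    return (INIT >> addr) & 1

print(lut_out(1, 1, 1, 1))  # 1
print(lut_out(0, 1, 1, 1))  # 0
```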


> thutt
> http://www.harp-project.com/

Article: 135346
Subject: Re: Is it possible to get an RTL netlist from Xilinx tools?
From: thutt <thutt151@comcast.net>
Date: 27 Sep 2008 11:09:27 -0700
Links: << >>  << T >>  << A >>
doug <xx@xx.com> writes:

> thutt wrote:
> > David R Brooks <davebXXX@iinet.net.au> writes:
> >
> >>thutt wrote:
> >>
> >>>Kevin Neilson <kevin_neilson@removethiscomcast.net> writes:
> >>>
> >>>
> >>>>thutt wrote:
> >>>>
> >>>>>Hello everyone,
> >>>>>I'm tired of using the Xilinx ISE to look at RTL schematics, mainly
> > <snip>
> >
> >>>If there were a lookup table of initialization values to the standard
> >>>'and / or / xor'-type logic that each represents, then it would be
> >>>quite possible to write a script to make an external RTL viewer based
> >>>on the technology netlist output by the Xilinx tools.
> >>>
> >>>Anyone game for a little sleuthing to determine all the LUT init
> >>>codes?
> >>>
> >>
> >>It's a bit harder than that. Assume a LUT has 4 inputs (it varies
> >>with different FPGA families). The synthesiser (eg XST) will look
> >>for a piece of combinatorial logic with 4 inputs & 1 output. It then
> >>simulates that logic, evaluating its output for all 16 possible
> >>input combinations. The initialisation string is simply the map of
> >>those 16 outputs, written out in hex.  So there really isn't an
> >>underlying logic representation: the designer might have specified
> >>any Boolean function whatever (adder, parity tree, etc): the
> >>synthesiser doesn't care.
> > I'm more confused now.  Somehow the LUT must correspond to some
> > concrete logic, and each LUT must be reconfigurable, right?
> > How does the netlist indicate what logic is implemented in the LUT?
> > If it's possible to determine that information, I'll be happy.
> >
> A LUT is a 16x1 memory. It implements the truth table of the logic.
> That is what it does.  The memory contents tell you the truth table
> of the logic.  

Your answer is not very descriptive.  *HOW* do the memory contents
tell you the truth table?

Can one determine the memory contents from the initialization string?

> The details of the logic are unimportant, the output is important.
> That is what is implemented.  

If you recall, my goal is only to produce a schematic of the RTL, so
the details of the logic *ARE* important to me.

> To make a schematic of this, just draw a 16x1 memory and list the
> memory contents.

Pray tell, how does one map the memory contents into the Karnaugh map
or the logic equations?

thutt
http://www.harp-project.com/

-- 
Hoegaarden!

Article: 135347
Subject: Re: maximum clock rating
From: nico@puntnl.niks (Nico Coesel)
Date: Sat, 27 Sep 2008 19:00:29 GMT
Links: << >>  << T >>  << A >>
uraniumore238@gmail.com wrote:

>I am using a XILINX  Spartan II  XC2S50 board and would like to know
>what the maximum clock rating for the chip is and if I can clock it
>with an in-coming clock osc. at 150 MHz...

Downloading the datasheet from Xilinx is a good start. Whether you can
actually use this frequency depends on your design. I expect 150MHz is
a bit too much for a Spartan II to do anything useful.

-- 
Programmeren in Almere?
E-mail naar nico@nctdevpuntnl (punt=.)

Article: 135348
Subject: Re: OFDM band switch ...
From: "Kappa" <NO_SPAM_78kappa78@virgilio.it_NO_SPAM>
Date: Sat, 27 Sep 2008 21:02:36 +0200
Links: << >>  << T >>  << A >>

Hi Jerzy Gbur.

> I'm not sure I understood. But what I propose, for a narrow band, is
> to set, for example, [0 - 511 Data] [512 - 1535 Null] [1536 - 2047
> Data]; it depends on exactly what band you want.
> The length of the IFFT transform should stay the same.

I still do not understand ... how can I reduce the number of carriers if I 
have to maintain a "Standard"?

In DVB-T 2K mode there are 1705 carriers. These carriers, through a
2048-point IFFT, produce the time-domain signal. If I output the data at a
sample period of 7/64e-6 s, I obtain an 8 MHz band.

How do I clock these samples at a period of 1/8e-6 s, with the same 64 MHz
clock, to obtain a 7 MHz band???

> IMHO it's the simplest way to fulfill your requirements.

How ???

Thanks.

Kappa. 



Article: 135349
Subject: Re: Is it possible to get an RTL netlist from Xilinx tools?
From: Muzaffer Kal <kal@dspia.com>
Date: Sat, 27 Sep 2008 12:05:04 -0700
Links: << >>  << T >>  << A >>
On 27 Sep 2008 11:09:27 -0700, thutt <thutt151@comcast.net> wrote:
>> To make a schematic of this, just draw a 16x1 memory and list the
>> memory contents.
>
>Pray tell, how does one map the memory contents into the karnaugh map
>or the logic equations?

A 16x1 LUT can be represented by the following Verilog snippet:

reg out;
always @*
  case (addr)
    4'b0000: out = ?;
    4'b0001: out = ?;
    ...
    4'b1111: out = ?;
  endcase

I'm hoping you can immediately see how to map this to a Karnaugh map
of 4 rows by 4 columns: just use the address bits as the indices and
enter the value of out in the respective location. Similarly, for
logic equations you can decode all the conditions which assign a 1 to
out and "or" together all of the product terms, i.e.

out = addr0 & !addr1 + !addr0 & addr2 + ...

which obviously contains exactly the same information as the case
statement and the Karnaugh map.
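The decoding Muzaffer describes is mechanical, so it can be scripted,
which was thutt's original goal. A Python sketch that expands a LUT INIT
word into a sum-of-products expression; the INIT values and the
addr0/addr1 signal names are hypothetical, following the equation above:

```python
def init_to_sop(init, n_inputs=4):
    """Expand a LUT INIT word into a sum-of-products string: one product
    term per address whose stored truth-table bit is 1."""
    terms = []
    for addr in range(1 << n_inputs):
        if (init >> addr) & 1:
            lits = []
            for bit in range(n_inputs):
                name = f"addr{bit}"
                # Input bit set in this address -> true literal, else negated.
                lits.append(name if (addr >> bit) & 1 else "!" + name)
            terms.append(" & ".join(lits))
    return " + ".join(terms) if terms else "0"

# INIT = 0x6 sets addresses 1 and 2: a 2-input XOR.
print(init_to_sop(0x6, n_inputs=2))  # addr0 & !addr1 + !addr0 & addr1
```

The resulting expression is unminimized (one term per set bit); running
it through a minimizer would recover a more conventional schematic symbol.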


