Messages from 68375

Article: 68375
Subject: Re: signal names in modelsim
From: "Barry Brown" <barry_brown@remove_this.agilent.com>
Date: Fri, 2 Apr 2004 08:13:51 -0800
In the wave window, click on Tools > Window Preferences, and set the
"Display Signal Path" to 1.  Then you can save your wave window setup by
clicking File > Save Format.

Barry Brown



"Frank van Eijkelenburg" <someone@work.com> wrote in message
news:406d746c$0$65165$d5255a0c@news.versatel.net...
> Hi,
>
> does anyone know if it's possible to display short signal names in the wave
> window of modelsim. I have for instance a signal
> /gb_eth_tb/ethernet_mac/receive_state in my wave window. How can I get
> modelsim so far that it only displays the name receive_state in the wave
> window. I saw that you can give it a "display name", can this be done by a
> script?
>
> TIA,
> Frank
>
>



Article: 68376
Subject: Re: signal names in modelsim
From: PO Laprise <pl_N0SP4M_apri@cim._N0SP4M_mcgill.ca>
Date: Fri, 02 Apr 2004 16:25:04 GMT
Frank van Eijkelenburg wrote:

> Hi,
> 
> does anyone know if it's possible to display short signal names in the wave
> window of modelsim. I have for instance a signal
> /gb_eth_tb/ethernet_mac/receive_state in my wave window. How can I get
> modelsim so far that it only displays the name receive_state in the wave
> window. I saw that you can give it a "display name", can this be done by a
> script?
> 
> TIA,
> Frank
> 

In the "wave" window, Tools -> Window Preferences -> Display Signal 
Path.  Set this to 1.  If you want this every time you start the 
project, change the "WaveSignalNameWidth" parameter in the mpf file to 
1, or if for all projects, set it in the modelsim.ini (if I remember the 
name correctly).  I'm not sure how (if possible) to set these parameters 
run-time with a script, but I imagine it's somewhere in the docs.
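For anyone who does want to script it, here is a minimal sketch of the 
relevant ModelSim Tcl commands, using the signal path from Frank's example 
(check your version's Command Reference for details):

```tcl
# In a .do file or at the vsim prompt -- show only the leaf signal name:
configure wave -signalnamewidth 1

# Optionally give one signal a short display label when adding it:
add wave -label receive_state /gb_eth_tb/ethernet_mac/receive_state

# Save the current wave window layout so it can be reloaded later:
write format wave wave.do
```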

-- 
Pierre-Olivier

-- to email me directly, remove all _N0SP4M_ from my address --


Article: 68377
Subject: Re: vcom in modelsim
From: John Smith <user@example.net>
Date: Fri, 02 Apr 2004 19:28:37 +0300


Frank van Eijkelenburg wrote:
> Hi,
> 
> is it possible to compile only the files that needs to be compiled. I have a
> script which compiles a set of vhdl files, but if I do a change in the last
> vhdl file in the list, only this file needs to be recompiled. Is there an
> option for vcom that gives this desired behaviour?

Try Googling for "modelsim vmake". vmake is a ModelSim command that 
generates a makefile. I haven't used it myself, but I know such a 
feature exists.

See for example http://www.tkt.cs.tut.fi/software/modeltech/tutorial/

HTH
--
JS


Article: 68378
Subject: Re: ML300 and GigE Experiences
From: Matthew E Rosenthal <mer2@andrew.cmu.edu>
Date: Fri, 2 Apr 2004 12:36:32 -0500 (EST)
Tony,
I used a HW gmac core on the ML300.  I believe we used a differential
clock input (62.5 * 2 = 125 MHz); maybe you can use this clock instead.
This signal is provided on the ML300 board.  I don't have the docs in
front of me, but I believe it comes in on either pins B13/C13 or B14/C14.

My other experience with the gmac core and the corresponding reference
designs was VERY bad at best, and Xilinx support in that area is no better.
Maybe using the gig ports with the PPC is a little better, but...

Matt

On Fri, 2 Apr 2004, Tony wrote:

> I am curious if anyone here has had success maintaining a very low BER
> link using the fiber connections on the ML300 boards.
>
> We have implemented an Aurora Protocol PLB Core for the ML300 (adding
> interface FIFO and FSMs to the Aurora CoreGen v2 core.  It is
> currently a single lane system using Gige-0 on the ml300 board (MGT
> X3Y1).  We were having small issues using the 156.25 bref clock so we
> are currently using a 100 MHz clock (we are just using the PLB clock
> plb_clk out of the Clock0 module on the EDK2 reference system).  Clock
> compensation occurs at about 2500 reference clocks. (tried 5000, same,
> if not worse problems).   Best results were with Diffswing=800mv,
> Pre-Em=33%.
>
> Unfortunately our link has problems staying up for more than 20
> minutes (it will spontaneously lose link and channel, until a
> mgt-reset on both partners kicks them off again).  Additionally, there
> are mass HARD and SOFT errors reported by the Aurora core.   I do not
> send any data, just let the Aurora core auto-idle.  This is the
> timing:
>
> DIFFSW=800 PREEM=33%   Stays up: 30+ minutes, ~5 soft errors/sec
> DIFFSW=700 PREEM=33%   Stays up: 30+ minutes, ~10 soft errors/sec
> DIFFSW=600 PREEM=33%   Stays up: not tested, ~20 soft errors/sec
> (explodes to 200-300 errors/sec at about 13 minutes)
> DIFFSW=500 PREEM=33%   Stays up: not tested, ~30 soft errors/sec
> (explodes to 200-300 errors/sec at about 13 minutes)
>
> DIFFSW=800 PREEM=25%   Stays up: not tested, ~200-300 soft errors/sec
>
> - In loopback mode (serial or parallel) the channel/lane are crisp and
> clean as ever.
>
> - When the boards start up, the errors in each situation are small
> parts/second, but then grow over time.  I dont know if this is a
> function of board/chip temperature (i put a heat sink on and it seems
> to slow the increase of the error rate), or if for some reason the
> Aurora core cannot compensate for some clock skew and jitter
>
> -
>
>
> Could any of you guys steer me in the right direction?
>
> Is the higher loaded plb_clk as my ref_clk a source of problem?
> Anybody able to get low error rates?
>
> Thanks,
> Tony
>
>
>
>
>
>
>

Article: 68379
Subject: Re: vcom in modelsim
From: Chris <cgs-news56@cg983schneider.com>
Date: Fri, 02 Apr 2004 19:56:46 +0200
John Smith wrote:

> Try Googling for "modelsim vmake". vmake is a Modelsim command that 
> generates a makefile. Haven't used it myself, but I know such feature 
> exists.

That's true.

1) remove your old library

2) compile the sources with your script (compile, not simulate)

3) vmake > Makefile  (creates a Makefile from the library)

4) every later compilation can be started with "make" (only changed files 
will be recompiled)
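Put together, the flow looks roughly like this -- a sketch only, assuming 
ModelSim's vdel/vlib/vcom/vmake are on the PATH, and with placeholder 
file names:

```shell
vdel -lib work -all                # 1) remove the old library
vlib work                          #    recreate it
vcom top.vhd sub1.vhd sub2.vhd     # 2) compile the sources once
vmake work > Makefile              # 3) generate a Makefile from the library
make                               # 4) recompiles only out-of-date files
```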


BR, Chris


Article: 68380
Subject: Re: Help with Xilinx Ram16X1S example VHDL code
From: Brian Philofsky <brian.philofsky@no_xilinx_spam.com>
Date: Fri, 02 Apr 2004 11:06:13 -0700


jtw wrote:
> See below for one potential issue:  the synthesis translate_off and
> translate_on directives should surround the simulation-only portion of the
> code.


This is old advice; actually, I would suggest you do not isolate the 
INITs with translate_off/translate_on if you are using a recent 
version of XST or Synplicity (I do not know about Mentor or Synopsys).  
By not using the synthesis translate directives, the same INIT values 
are used for both synthesis and simulation.  Otherwise, you need to 
specify them twice, with a greater chance of getting them out of sync 
with each other, plus the fact that you write more lines of code to do 
the same thing.

I think this problem stemmed from the fact that INIT was declared as a 
string in the component declaration, as Paulo identified, which would 
cause binding errors in simulation as well as synthesis, so using 
translate_off would not have corrected the problem.  The INIT should 
have been declared as a bit_vector.  Probably even better is to not 
write the component declaration at all and instead just let the:

library UNISIM;
use UNISIM.VComponents.all;

bind the component.  Again, in recent versions of the synthesis tools 
you do not need to isolate the library declaration with 
translate_on/off; it will be used to properly bind the components.  
This makes for cleaner code in my opinion and limits the possibility 
of encountering issues like this one.
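As a sketch of what that looks like, adapted from the code quoted below 
(the bit_vector INIT literal is the key change; recent XST/Synplicity 
should honor it for both synthesis and simulation):

```vhdl
-- RAM16X1S instantiation with no local component declaration: the
-- component comes from UNISIM.VComponents, and INIT is a bit_vector.
library IEEE;
use IEEE.std_logic_1164.all;
library UNISIM;
use UNISIM.VComponents.all;  -- no translate_off/on around this

entity myRAM is
  port ( o   : out std_logic;
         we  : in  std_logic;
         clk : in  std_logic;
         d   : in  std_logic;
         a0, a1, a2, a3 : in std_logic );
end myRAM;

architecture xilinx of myRAM is
begin
  U0 : RAM16X1S
    generic map (INIT => X"FFFF")  -- bit_vector hex literal, not a string
    port map (O => o, WE => we, WCLK => clk, D => d,
              A0 => a0, A1 => a1, A2 => a2, A3 => a3);
end xilinx;
```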


--  Brian





> 
> Jason
> 
> "Patrick Robin" <circaeng@hotmail.com> wrote in message
> news:1c77d44b.0403272022.24c1a7e@posting.google.com...
> 
>>Hello,
>>
>>I have been trying to use distributed ram on a Spartan 3. I get an
>>error from XST with with this simple example from the docs
>>
>>error:
>>
>>Analyzing Entity <RAM16x1S> (Architecture <low_level_definition>).
>>ERROR:Xst:764 - E:/data/f100/f100mb/dutyRAM16.vhd line 48: No default
>>binding for component: <RAM16X1S>. Generic <INIT> is not on the
>>entity.
>>
>>
>>
>>-- This example shows how to create a
>>-- RAM using xilinx RAM16x1S component.
>>library IEEE;
>>use IEEE.std_logic_1164.all;
>>
> 
>                                       -- synthesis translate_off
> 
>>library UNISIM;
>>use UNISIM.VComponents.all;
> 
>                                      -- synthesis translate_on
> 
>>entity myRAM is
>>  port (
>>        o   : out std_logic;
>>        we  : in std_logic;
>>        clk : in std_logic;
>>        d   : in std_logic;
>>        a0,a1,a2,a3   : in std_logic
>>        );
>>end myRAM;
>>architecture xilinx of myRAM is
>>  component RAM16x1S is
>>    generic (INIT : string := "0000");
>>    port (
>>        O : out std_logic;
>>        D : in std_logic;
>>        A3, A2, A1, A0 : in std_logic;
>>        WE, WCLK : in std_logic
>>        );
>>  end component;
>>  begin
>>    U0 : RAM16x1S
>>    generic map (INIT => "FFFF")
>>    port map (O => o, WE => we, WCLK => clk, D => d,
>>                A0 => a0, A1 => a1, A2 => a2, A3 => a3);
>>
>>end xilinx;
>>
>>--------------------------------------------------------------------
>>
>>
>>Am I missing something?
>>
>>Thanks
>>
>>  Patrick
>>
> 
> 
> 


Article: 68381
Subject: Re: ML300 and GigE Experiences
From: Austin Lesea <austin@xilinx.com>
Date: Fri, 02 Apr 2004 10:10:58 -0800
Matt,

Do you have a case number?

I like to follow up on any less than happy experiences so that we can do 
better.

Did you have an FAE visit?  Did you visit a RocketLab?

Please reply to me directly, (austin (at) xilinx.com)

There is a case now open for Tony (yup, it took that long), and we are 
zeroing in on his issue.

Thank you,

Austin

Article: 68382
Subject: Re: ML300 and GigE Experiences
From: Tony <tonym_98@nospam_hotmail.com>
Date: Fri, 02 Apr 2004 18:37:55 GMT
Thanks Austin!

The hotline is getting back to me today or Monday with respect to the
MGT Gbps capability of our boards.

1) our clock is probably dirty.  It is the initial DCM output that
goes to the plb_clk of the reference design.  I noticed the DDR clock
is fed from another DCM that de-skews and cleans up the first DCM, so
I will do a quick switch to that to see if there is improvement.  I am
more and more convinced that the dedicated 156.25 BREF clock going
straight to the MGT is the cleanest signal, and will also give that a
try.  I have to get a scope from the other lab to test the 0->1 jitter
characteristics.

2) I am using the Aurora core v2 from coregen, so I am comfortable
saying the fabric is stable.  These errors occur when idling (no
pdu/nfc/ufc's), so it is not a synchronization problem with the aurora
core.

3) I haven't yet developed a test for this.  Right now we are picking
off falling edge HARD_ERROR, SOFT_ERROR, and FRAME_ERROR signals from
the Aurora core, and generating interrupts to the PPC405 core which
then prints to the screen every 100 interrupts, so there is
significant delay, but more than sufficient to gather error rate
statistics in the ~100/sec range.

4) Fiber, the ones that came with the ML300 kit.

5) I have to get a scope from the other lab to test this.

6) Far end loopback?  Do you mean the serial-mode loop back where it
goes to the pads?  Yes that works flawlessly.

7) I was planning a trip just to check out the labs anyway, should be
fun!

I'll reply with the result of switching to the DDR 100 MHz clock, and
the 156.25 MHz clock.

Regards,
Tony

On Fri, 02 Apr 2004 08:03:20 -0800, Austin Lesea <austin@xilinx.com>
wrote:

>Tony,
>
>A well designed link should be error free (ie many many hours without a 
>single bit in error).  Contact the hotline for details about MGT support 
>on specific ML300 series boards:  some early versions were not designed 
>for supporting links above 1 Gbs! as they were designed to show off the 
>405PPC(tm IBM) instead.
>
>So, there is a hundred things to check once you find out if your board 
>was built for MGT usage, but you have to start somewhere:
>
>1) is your refclk meeting the jitter spec?  The MGTs require a very low 
>jitter refclk.  You can check this by observing a 1,0 pattern from the 
>outputs of the MGTs and seeing how much jitter is there.  Should be much 
>less than 10% of a unit interval (bit period).  If it is more than this, 
>you have a tx jitter problem.  If you loop with a bad jitter rx clock, 
>everything is OK because the receiver is getting exactly the same bad 
>clock to work with.
>
>2) is your logic error free when looped back?  I think you said yes, but 
>often timing constraints may be missing, and the fabric is the source of 
>errors.
>
>3) are your errors in bursts?  or single?  Bursts may indicate FIFO 
>overflow/underflow (refclks far apart in frequency, and no means to deal 
>with it, or the means is not working in logic -- when looped, the same 
>clock is used, so no problem).
>
>4) what is the channel?  coax cables are not a differential channel, 
>common mode noise will roar right into the receiver if the channel is 
>not differential.  Usually the coax's are used to connect the TX and RX 
>  pairs to a XAUI adapter module to the actual backplane (still not 
>ideal, but at least most of the channel is differential).
>
>5) what does the received eye pattern look like?  This will tell you if 
>you have a jitter problem, or an amplitude/loss problem.  If the eye 
>looks fantastic, that takes you right back to the digital processing, 
>and takes away the analog side of things again....
>
>6) have you tried a far end loopback?  Loop the digital data directly 
>back to the far end tx from the far end rx  to go back to the near end.
>
>7) contact an FAE, and arrange to go to one of our 15 world wide 
>RocketLabs(tm) locations where we have all of the equipment and 
>resources to debug your board, and compare it with our own boards and 
>designs in the labs.
>
>Austin
>
>Tony wrote:
>
>> [Tony's original post snipped; quoted in full in article 68378]


Article: 68383
Subject: Verifying multi-cyclicity of multi-cycle paths
From: PO Laprise <pl_N0SP4M_apri@cim._N0SP4M_mcgill.ca>
Date: Fri, 02 Apr 2004 18:39:57 GMT
   Ok, here's a small variation on a previous post.  My design is giving 
the wrong results in post-PAR simulation (no SDF errors or warnings, no 
timing violations in PAR), and I want to verify whether my multi-cycle 
paths are to blame.  I would prefer to check this using behavioural code 
rather than the post-PAR model, because the latter takes days to run my 
complete testbench.  How would you go about this?  Post-PAR suggestions 
are also welcome, as are any other ideas as to the possible causes of 
such a discrepancy.  Thanks in advance...

-- 
Pierre-Olivier

-- to email me directly, remove all _N0SP4M_ from my address --


Article: 68384
Subject: Re: ML300 and GigE Experiences
From: Tony <tonym_98@nospam_hotmail.com>
Date: Fri, 02 Apr 2004 18:45:09 GMT
I should say response time has been extremely fast and the people I
spoke with were great to work with.  I called the hotline and they
opened up a case.  (Austin, I am not sure if this is the same case,
but I left your email and name with them).   I haven't used the GigE
core but the PLB interface version  seems very clean cut.


Regards,
Tony

On Fri, 02 Apr 2004 10:10:58 -0800, Austin Lesea <austin@xilinx.com>
wrote:

>[Austin's post snipped; see article 68381]


Article: 68385
Subject: Re: ML300 and GigE Experiences
From: Tony <tonym_98@nospam_hotmail.com>
Date: Fri, 02 Apr 2004 18:46:51 GMT
Matt,

The Ml300 also supplies a 156.25 differential clock, but if that gives
problems, the direct diff clock at 125 MHz would indeed be a step in
the right direction.  Thanks for the info!

Tony

On Fri, 2 Apr 2004 12:36:32 -0500 (EST), Matthew E Rosenthal
<mer2@andrew.cmu.edu> wrote:

>[Matt's post snipped; see article 68378]


Article: 68386
Subject: Re: PCI development kit
From: Dwayne Surdu-Miller <miller@SEDsystems.nospam.ca>
Date: Fri, 02 Apr 2004 13:08:45 -0600
Hi Subhek,

Here's a link to some fairly modest Virtex II / PCI development boards

http://www.dinigroup.com/

Best regards,
Dwayne Surdu-Miller


Article: 68387
Subject: vertex II vs Stratix
From: agwsu@yahoo.com (Andy)
Date: 2 Apr 2004 11:09:14 -0800
Hi everybody, could you help me choose between Altera's Stratix
and Xilinx Vertex II?  Also, how should I analyze the datasheets to
work out the pros and cons of the two architectures?
thanks
-andy

Article: 68388
Subject: FPGA input
From: iinu@juno.com (ty)
Date: 2 Apr 2004 11:09:34 -0800
I am working on a FIR/IIR project using the Xilinx Spartan 2E FPGA. I
am fairly new to the FPGA world and to VHDL. In this project I will be
given 4 sets of coefficients which I have to run through the filters to
determine what type of filter each one is. My question is: is there a
way to read in these coefficient files? Currently, I enter the
coefficients manually in the VHDL code.

Article: 68389
Subject: Re: ML300 and GigE Experiences
From: Austin Lesea <austin@xilinx.com>
Date: Fri, 02 Apr 2004 11:56:51 -0800
Tony,

See my comments below:

Austin

Tony wrote:
> Thanks Austin!

You're welcome.  Happy to help out.

> 
> The hotline is getting back to me today or monday with respect to the
> MGT gbps ability for our boards. 

They contacted me, and I gave them the names of the right people who 
know this stuff.  I may not be that smart, but at least I know who is!

> 
> 1) our clock is probably dirty.  It is the initial DCM output that
> goes to the plb_clk of the reference design.  I noticed the DDR clock
> is fed from another DCM that de-skews and cleans up the first DCM, so
> I will do a quick switch to that to see if there is improvment.  I am
> more and more convinced that the dedicated 156.25 BREF clock going
> straight to the MGT is the cleanest signal, and will also give that a
> try.  I have to get a scope from the other lab to test the 0->1 jitter
> characteristics.

Driving the MGT from the DCM definitely does not meet the MGT clock input 
specifications at 2.5Gbs and higher.  We have heard that some folks can 
do this without trouble at 622 Mbs and 1 Gbs, but it still is not 
recommended.  Driving it from two in tandem is even worse.

> 
> 2) I am using the Aurora core  v2 from coregen so I am comfortable
> saying the fabric is stable.  These errors occur when idling (no
> pdu/nfc/ufc's), so it is not a sychronization problem with the aurora
> core. 
> 

Sounds good.

> 3) I havent yet developed a test for this.  Right now we are picking
> off falling edge HARD_ERROR, SOFT_ERROR, and FRAME_ERROR signals from
> the Aurora core, and generating interrupts to the PPC405 core which
> then prints to the screen every 100 interrupts, so there is
> significant delay, but more than sufficient to gather error rate
> statistics in the ~100/sec range.

Have you thought of using the XBERT design for link characterization? 
If you are getting lost frame indications, then that is something far 
worse than a few bit errors......

> 
> 4) Fiber, the ones that come with the Ml300 Kit.

OK

> 
> 5) I have to get a scope from the other lab to test this.

OK

> 
> 6) Far end loopback?  Do you mean the serial-mode loop back where it
> goes to the pads?  Yes that works flawlessly.

No, I was thinking of looping it back at the far end receive digital end 
to go back towards the receiver, but I do not think you need to do this.

> 
> 7) I was planning a trip just to check out the labs anyway, should be
> fun!

Yes, we have a lot of fun.  Since the equipment is there 24 X 7, the 
FAEs get to play with it, and they get proficient with it.  Time gets 
saved because set up is sometimes the hardest part of any verification 
or measurement.  Knowing the equipment, and the setup, and using it 
benefits everyone.


> 
> I'll reply with the result of switching to the DDR 100 MHz clock, and
> the 156.25 MHz clock.

If that is easy, that might have a real big benefit.

> 
> Regards,
> Tony
> 
> On Fri, 02 Apr 2004 08:03:20 -0800, Austin Lesea <austin@xilinx.com>
> wrote:
> 
> 
>> [earlier posts snipped; see article 68382]
> 
> 

Article: 68390
Subject: Re: vertex II vs Stratix
From: Austin Lesea <austin@xilinx.com>
Date: Fri, 02 Apr 2004 12:01:43 -0800
Andy,

What are you trying to do?

It is very hard to compare something when no one knows what the 
application is.

For example, if you need embedded ultra low power 32 bit RISC >400 MIPs 
processors, there is no choice other than VII Pro with up to two IBM 
405PPC (tm).

Or, if you need twenty 10 Gbs interfaces, there is no choice other than 
the 2VP70X.

And we misspelled it Virtex(tm) on purpose, so we could trademark the name.

Austin

Andy wrote:

> Hi everybody, Could you people help me choose between Altera's Stratix
> and Xilinx Vertex II...also as how to analyze the datasheet to
> conclude the pros and cons of both the architectures?
> thanks
> -andy

Article: 68391
Subject: Re: Metastablility
From: Brian Philofsky <brian.philofsky@no_xilinx_spam.com>
Date: Fri, 02 Apr 2004 14:30:05 -0700


I felt I should chime in on this thread, as I am responsible for the 
Xilinx simulation models and at the same time have a history of 
metastability testing with Peter (the expert in metastability) to draw 
from.  I have put some thought into this very subject of simulating 
metastability in a timing simulation, and there are a few issues, none 
of them easy to overcome, that would need to be addressed to ever make 
it a reality.

First off, as Peter mentioned, the actual time slice in which any 
measurable metastable event would happen is a tiny part of the 
setup/hold window.  The setup and hold window modeled for timing 
analysis is artificially much bigger than what is really seen in the 
silicon, due to uncertainties not only in the actual size of the window 
but also in the routing delays of the data and clock paths to the FF.  
To model metastability delays anywhere in the setup/hold window would 
be a very pessimistic way to look at the problem, and you cannot guess 
where in the window it would really happen: at the beginning, middle, 
or end.

The second issue is how to determine how long the FF stays metastable.  
We could use statistical probabilities and random numbers to calculate 
resolution times, but that would not necessarily show reality (it shows 
probability), which would be the goal of doing this, and at the same 
time it would significantly slow down an already slow timing 
simulation, since a random number would need to be generated and 
applied to a probability equation at every FF.  This would make the FF 
elements more complicated and would most likely add confusion when 
interpreting the model (why does it go X for 2 ns this time but 200 ps 
the next run?).

Finally, when a setup/hold violation happens, the simulation value goes 
to X to indicate that the final value is not known.  This is partially 
due to metastability, but more due to the fact that during the 
violation the value may or may not have changed, and that is not known.  
Producing X's is the only way for the simulator to tell you that it is 
driving a value but does not know what that value is, and because of 
this there is no easy way to distinguish between being in a metastable 
state and simply not knowing whether the FF toggled.  Without being 
able to represent that distinction, there is no easy way to model that 
the FF is metastable for some time after the violation and then settles 
to a "good" value later, although we are not sure which value it is.
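As an aside, the statistical approach described (and dismissed) above could be sketched roughly as follows.  The resolution-time distribution of a metastable FF decays exponentially, so random resolution times can be drawn from an exponential distribution; the time constant and offset here are purely illustrative numbers, not characterized silicon values:

```python
import random

def resolution_time(tau_ps, t0_ps, rng):
    """Draw a random metastability resolution time, in picoseconds.

    The probability that a FF is still unresolved after time t decays
    as exp(-t / tau), so resolution times beyond a minimum clock-to-out
    t0 are exponentially distributed.  tau_ps and t0_ps are made-up
    illustrative values, not from any device characterization.
    """
    return t0_ps + rng.expovariate(1.0 / tau_ps)

rng = random.Random(42)
samples = [resolution_time(tau_ps=50.0, t0_ps=20.0, rng=rng)
           for _ in range(10000)]

# Most events resolve quickly, but the tail is unbounded -- which is
# exactly why a simulator cannot pick one "correct" delay to report,
# and why doing this draw at every FF would slow simulation badly.
mean = sum(samples) / len(samples)
print(f"mean resolution ~{mean:.0f} ps, worst draw ~{max(samples):.0f} ps")
```

This illustrates the "2 ns this run, 200 ps the next" confusion: two simulations of the same violation would legitimately report different delays.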

So does this mean that all circuits with asynchronous interfaces are 
doomed?  No.  What it means is that you must use good design practices 
in lieu of good simulation models to combat the effects of 
metastability.  Really, metastability is only a big problem in two 
scenarios: when a metastable FF feeds many other FFs, or when you are 
trying to collect multiple signals (a bus) across an asynchronous 
interface.  The best way to handle bused data signals crossing clock 
domains is an asynchronous FIFO, which will safely collect and transfer 
the data from one clock domain to the other.  For control and other 
single-bit signals that must cross clock domains, you need to 
double-register the data so that the potentially metastable register 
feeds only one other FF, over a fairly short data path, to give maximum 
settling time.  Since the subject is simulation: for timing simulation 
with Xilinx devices, you should put the attribute ASYNC_REG=TRUE on the 
synchronizing FF so that it will not go to X when a timing violation 
occurs but will instead keep its last value, allowing you to still 
perform a timing simulation of the interface.  Properly designing for 
metastability lessens the need for simulation models that accurately 
model metastability.  Both of these techniques are well documented all 
over the Xilinx web site.
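The payoff of giving the second register that extra settling time can be quantified with the usual MTBF approximation, MTBF = exp(t/C2) / (f_clk * f_data * C1).  The constants below are invented for illustration only; real values come from measuring the particular device:

```python
import math

def mtbf_seconds(t_settle_s, f_clk_hz, f_data_hz, c1_s=1e-10, c2_s=5e-11):
    """Classic metastability MTBF approximation.

    c1_s (effective aperture) and c2_s (resolution time constant) are
    made-up illustrative values, not from any device characterization.
    """
    return math.exp(t_settle_s / c2_s) / (f_clk_hz * f_data_hz * c1_s)

# Because settling time sits in the exponent, a little extra slack on
# the path between the two synchronizer FFs buys an enormous MTBF gain:
short = mtbf_seconds(2.0e-9, 100e6, 10e6)
longer = mtbf_seconds(2.5e-9, 100e6, 10e6)
print(f"MTBF gain for +0.5 ns of slack: {longer / short:.0f}x")
```

This exponential dependence is why keeping the data path between the two synchronizer flip-flops short matters so much.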

Hopefully this adds some more insight into the subject.  It would be 
very interesting to have models that show this behavior, but the 
reality is that it is not likely to happen any time soon.


--  Brian


Muthu wrote:
> Hi,
> 
> Is it possible to simulate Metastability? Not in the functional
> simulation. But in Gate level Netlist simulation.
> 
> Regards,
> Muthu


Article: 68392
Subject: Re: vertex II vs Stratix
From: Peter Alfke <peter@xilinx.com>
Date: Fri, 02 Apr 2004 13:31:30 -0800
How do you choose between a Toyota and a Honda, or between a Mercedes and a
BMW ? You have to study the documentation, and get beyond the common
features, then evaluate the differences, and find out what's most important
for you.
You might also read the marketing literature, but you have to realize that
marketeers will only dwell on the positive aspects, and they might
occasionally even stoop so low as to exaggerate...
It's Application's never-ending job to keep them honest.
Peter Alfke
======================================
> From: agwsu@yahoo.com (Andy)
> Organization: http://groups.google.com
> Newsgroups: comp.arch.fpga
> Date: 2 Apr 2004 11:09:14 -0800
> Subject: vertex II vs Stratix
> 
> Hi everybody, Could you people help me choose between Altera's Stratix
> and Xilinx Vertex II...also as how to analyze the datasheet to
> conclude the pros and cons of both the architectures?
> thanks
> -andy


Article: 68393
Subject: Re: AHDL, VERILOG or VHDL??
From: "Nial Stewart" <nial@nialstewartdevelopments.co.uk>
Date: Fri, 2 Apr 2004 22:54:20 +0100

"Hendra Gunawan" <u1000393@email.sjsu.edu> wrote in message
news:c4ib4l$6hud0$1@hades.csu.net...

> Which means unlike Xilinx, Altera does not provide the free version of
> ModelSim.


Ahhh, but in marketing speak it does. If you buy a subscription
for the Altera software you get ModelsimAE thrown in free :-)

BTW the ModelsimAE is slightly slower than the full version
but not as knobbled as ModelsimXE.


Nial



Article: 68394
Subject: Re: Verifying multi-cyclicity of multi-cycle paths
From: Brian Philofsky <brian.philofsky@no_xilinx_spam.com>
Date: Fri, 02 Apr 2004 14:57:42 -0700


Have you thought about performing a timing simulation on just the part 
of the design in question?  If the multi-cycle paths are contained 
within a single level of hierarchy (good design practice) or a few 
levels of hierarchy, then that portion of the design can be separated 
from the rest for a timing simulation, provided you retain that 
hierarchy during implementation.  To do this, you need to define the 
portions of hierarchy you wish to keep for verification by telling your 
synthesis tool to maintain the hierarchy for those instances (how this 
is done differs for each synthesis tool), and then put a 
KEEP_HIERARCHY=TRUE attribute on that hierarchy as well, to tell the 
P&R tools to maintain those boundaries too.  If you do this, then when 
you generate a post-PAR simulation model you can use the -mhf switch 
("Generate Multiple Hierarchical Netlist Files" in the GUI) to generate 
separate, smaller gate-level netlists and SDF files for each level of 
hierarchy you asked to maintain.  This methodology can be very useful 
in your situation and can cut your simulation time to a fraction of 
what you spent simulating the entire design; however, you must have 
good hierarchy to begin with (no timing paths that span many levels of 
hierarchy) and you must implement the design using this methodology.  
You do not need to specify KEEP_HIERARCHY on every hierarchical block 
in your design, and many times you do not want to, but you should do so 
on blocks that can assist in verification.  There are several 
references to this in the Xilinx docs and web site.  Search for 
KEEP_HIERARCHY and you should be able to find more information on this 
if you are interested.

--  Brian


PO Laprise wrote:

>   Ok, here's a small variation on a previous post.  My design is giving 
> the wrong results in post-PAR simulation (no SDF errors or warnings, no 
> timing violations in PAR), and I want to verify whether my multi-cycle 
> paths are to blame.  I would prefer to check this using behavioural code 
> rather than the post-PAR model, because the latter takes days to run my 
> complete testbench.  How would you go about this?  Post-PAR suggestions 
> are also welcome, as are any other ideas as to the possible causes of 
> such a discrepancy.  Thanks in advance...
> 


Article: 68395
Subject: Logic required for multiplication
From: spanchag@yahoo.com (spanchag)
Date: 2 Apr 2004 15:23:01 -0800
Hi

I would like to find out if there is some sort of an equation where we
give the size of the two inputs and it tells us how many flip flops it
is going to use to implement the multiplier function.

(It may be specific to the architecture of the chip but a rough
estimate would do)

Thanks,

Article: 68396
Subject: Re: Logic required for multiplication
From: Peter Alfke <peter@xilinx.com>
Date: Fri, 02 Apr 2004 16:10:27 -0800
In the newer FPGAs (like Virtex-II etc.) you get ready-made multipliers
that do not "cost" any flip-flops, but you can use the pipeline
flip-flops inside them for free.
For multiplier-intensive applications, you are asking the wrong question.
Multiplication is often implemented as a combinatorial function, without
any flip-flops except for pipelining.
You can of course do it sequentially, and there is an endless number of
options.  But for speed reasons, most users prefer the "hard"
combinatorial multipliers in the newer chips.
Peter Alfke
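To answer the original question with a rough rule of thumb (not architecture-specific, and assuming a simple sequential shift-add implementation rather than the hard multipliers Peter describes): an N-bit x M-bit shift-add multiplier needs about N+M flip-flops for the product/accumulator register plus a small cycle counter, while a purely combinatorial multiplier needs none.  A sketch of that arithmetic:

```python
import math

def shift_add_ff_estimate(n_bits, m_bits):
    """Rough flip-flop count for a sequential shift-add multiplier.

    Rule of thumb only: one (n+m)-bit product/accumulator register
    (the multiplier operand is typically held in its low half) plus a
    counter wide enough to count m add/shift cycles.  Real
    implementations vary with the architecture and the controller.
    """
    product_reg = n_bits + m_bits
    counter = math.ceil(math.log2(m_bits)) if m_bits > 1 else 1
    return product_reg + counter

print(shift_add_ff_estimate(16, 16))  # 32-bit product reg + 4-bit counter = 36
```

A fully combinatorial (array) multiplier instead costs roughly N*M LUT/adder cells and zero flip-flops, which is the trade-off Peter points at.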

> From: spanchag@yahoo.com (spanchag)
> Organization: http://groups.google.com
> Newsgroups: comp.arch.fpga
> Date: 2 Apr 2004 15:23:01 -0800
> Subject: Logic required for multiplication
> 
> Hi
> 
> I would like to find out if there is some sort of an equation where we
> give the size of the two inputs and it tells us how many flip flops it
> is going to use to implement the multiplier function.
> 
> (It may be specific to the architecture of the chip but a rough
> estimate would do)
> 
> Thanks,


Article: 68397
Subject: Re: Metastablility
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Sat, 03 Apr 2004 00:18:21 GMT
Peter Alfke wrote:


> Metastability occurs in daily life: at a 4-way intersection, in any
> situation where conflicting inputs must be resolved and the arguments for
> both sides are perfectly balanced. "Take the right road or the left road"
> and while arguing about the advantages, you end up in the ditch....
> Let Condoleezza testify or not...

Once driving out in the middle of nowhere (cornfields in Illinois)
three cars came to a four way stop at (almost) exactly the same
time.  It took a long time to figure out who should go first.

It does make a good, everyday life, example...

-- glen


Article: 68398
Subject: Re: Metastablility
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Sat, 03 Apr 2004 00:33:12 GMT
Brian Philofsky wrote:


> I felt I should chime in on this thread, as I am responsible for the 
> Xilinx simulation models and at the same time have a history of 
> metastability testing with Peter (the expert in metastability) to draw 
> from.  I have put some thought into this very subject of simulating 
> metastability in a timing simulation, and there are a few issues, none 
> of them easy to overcome, that would need to be addressed to ever make 
> it a reality.  First off, as Peter mentioned, the actual time slice in 
> which any measurable metastable event would happen is a tiny part of 
> the setup/hold window.  

(big snip)

Say one builds a flip-flop out of CLB logic, maybe just to
demonstrate metastability.  Would that have a significantly
wider window and/or longer resolution time?

-- glen


Article: 68399
Subject: Re: rs232 interface on nios
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Sat, 03 Apr 2004 00:38:38 GMT
Ray Andraka wrote:

> Use one of those RS232 level translators and do it right.  They are little more
> than special inverters.  Some have a charge pump built in so that they operate
> off a 5v supply.  Costs are pretty low.  Check Maxim.

As far as I remember, RS-232 inputs and outputs are supposed
to withstand +/- 25 volts.  That is a little more
than normal inverters can handle, though with some resistors and zener
diodes it could probably be done.

Much easier to buy them.  If you have +/- 12V power available,
the old 1488 and 1489 are pretty cheap.  If not, the Maxim parts.

-- glen



