

Messages from 112225

Article: 112225
Subject: Re: JTAG connection for chipscope
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Fri, 17 Nov 2006 23:34:33 -0800
Ditto for EDK.  My experience is that you can only use JTAG for one thing at a 
time (when using Xilinx applications).

Trevor

"yttrium" <yttrium@telenet.be> wrote in message 
news:8mm3h.6751$Pr5.6212@blueberry.telenet-ops.be...
> John Adair wrote:
>> One thing to check is that Impact is not open when using Chipscope. I
>> have seen cases of Impact affecting chipscope operation.
>>
>> John Adair
>> Enterpoint Ltd.
>>



Article: 112226
Subject: Re: FPGA's for Ethernet?
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Fri, 17 Nov 2006 23:54:23 -0800
Todd,

   I have tried a lot of solutions...

1)  Using the OpenCores core embedded inside an Altera Cyclone I with NIOS 
and a purchased stack.  The performance was dreadful.  Now, this was about 5 
years ago, so things may well be better by now.  The fastest we could ever get 
the pig to sustain was something around 20 Mbits/sec.

2)  An external PCI-based chip with a PCI bridge implemented in the FPGA, 
using embedded Linux.  This was a real fast and "sexy" solution.  We were 
seeing sustained rates around 78 Mbits/sec.

3)  Virtex-4 with the embedded hard-core MAC, using embedded Linux.  Sweet, 
but we had to write a driver.  Performance was awesome, but it took a while to 
get working due to the software-writing exercise.

4)  We used an external Ethernet to USB adaptor (bought at Fry's) with a 
Linux driver.  Then we stuck a USB core in the Xilinx and ran embedded 
Linux.  This was done to prove that we didn't need PCI (solution 2).  The 
nice part about this solution was the guaranteed bandwidth allocation.

5)  We used the Rabbit silicon as well.  It was real nice - the crap just 
worked with no fuss.

As for the interconnect with the DSP ... I would ask whether the algorithms 
can be implemented in the FPGAs directly.  I am not a DSP expert, but both 
Xilinx and Altera are winning a lot of traditional DSP designs these days. 
The DSP guys that I know all love the FPGAs.

As for choosing a vendor - they both have workable solutions.  If you have a 
good FAE supporting you for Altera - stick with them.  I have done several 
SOC-type designs in each family, and every single one of them had "weird" 
problems related to the tools - without good support I'd have been dead 
meat.  Just glance at all the compiler/fitter/mapper problems posted here. 
As much as they try for it not to happen ... we end up holding the bag.

Trevor




"Todd" <tschoepflin@gmail.com> wrote in message 
news:1162255661.055244.134860@e64g2000cwd.googlegroups.com...
> Hi all
>
> I'm a design engineer trying to evaluate the large number of
> possibilities for adding Ethernet to our embedded system.
>
> So far I've been very impressed by the Altera Cyclone II with NIOS II
> and free lightweight TCP/IP stack.   Adding Ethernet appears to amount
> to the Cyclone II and a MAC+PHY chip like LAN91C111 (or equivalent).
>
> Anyone have experience with using the Cyclone II merely for Ethernet?
> Should I try to put the MAC inside the FPGA and just use an external
> PHY?
>
> Any recommendations for a communication protocol between the FPGA and
> my DSP?  SPI seems the most obvious choice for reasonably high
> bandwidth (>6 Mbps).  Right now my DSP runs from a 1.5 Mbps UART so
> mimicking this data flow would save me a bunch of assembly code
> changes. However, I'd like to send more data back to the host so could
> use upwards of 6 Mbps.
>
> Also, I'm interested in general recommendations for System on a
> Programmable Chip (SOPC), which Altera is obviously highly interested
> in advancing.  It seems very attractive since I could eventually get
> rid of the DSP by simply creating a second NIOS II processor within the
> FPGA and porting my assembly code to C.  The upgrade path is
> straightforward and indefinite since Altera will keep coming up with
> even better FPGAs.  Any caveats or warnings?  Lastly, are there major
> reasons I should be considering Xilinx instead?
>
> Thanks in advance for the help!
> -Todd
> 



Article: 112227
Subject: Re: How stable is the internal clock of a Xilinx CPLD?
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Fri, 17 Nov 2006 23:59:42 -0800
I have actually built one out of a chain of inverters before, and I would say 
a few things...

1)  It was cool, but the frequency was all over the place.
2)  As the manufacturing process changed, so did my circuit's operation.  That 
made it real hard to support over the 5 years the product was shipping.  I 
think we saved $0.80 per board but spent $200K over 5 years recertifying 
the products due to die changes.
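For scale, that cost trade-off works out roughly like this (the shipped-volume 
figure below is a made-up assumption, purely for illustration):

```python
# Back-of-the-envelope on the ring-oscillator "savings" above.
# The shipped-volume figure is a made-up assumption for illustration.
saved_per_board = 0.80       # $ saved per board by omitting the oscillator
recert_cost = 200_000.0      # $ spent recertifying over the 5 years

break_even_boards = recert_cost / saved_per_board
print(break_even_boards)     # boards needed just to break even

shipped = 50_000             # hypothetical shipped volume
net = saved_per_board * shipped - recert_cost
print(net)                   # negative -> the "cheap" clock lost money
```

Unless the product ships in the hundreds of thousands, the crystal oscillator 
is the cheaper part.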

Trevor



"Peter Alfke" <alfke@sbcglobal.net> wrote in message 
news:1162270545.016307.190650@m73g2000cwd.googlegroups.com...
> Sorry, I was lazy and answered what the clock would be, if there were
> one. In fact, there is no internal clock, unless you build one out of a
> chain of inverters/non-inverters.
> Get a crystal oscillator; they are cheap and good!
> Peter Alfke
>



Article: 112228
Subject: Re: Have you experience to program the APA series using FlashPro Lite?
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Sat, 18 Nov 2006 00:07:06 -0800
Here are a couple of other possibilities ... JTAG termination (pull-ups), 
JTAG connections (TDO connected to another TDO, for instance), and my 
favorite ... does the chip have power?  One other thing I once saw - there 
was a version of the Actel APA parts you could buy that had to be 
programmed with a stand-alone programmer before they would program 
in-system.  As I recall, they were pre-set at the factory to disable JTAG 
access.  I think Actel changed the default about 18 months ago, so unless 
the parts are old that wouldn't be it.

As Alan mentioned, though - usually it means a trace is broken between the 
chip and the header connector.

Trevor



"Alan Myler" <amyler@eircom.net> wrote in message 
news:4541C774.1040907@eircom.net...
> kypilop wrote:
>
>> I have the FlashPro Lite programmer from Actel, but in use it is very
>> uncomfortable because I don't know how to solve the errors... Does anyone
>> have the error documentation?
>> These are the errors I faced...
>>
>> programmer 'FPL31LPT1' : Scan Chain...
>> Error: programmer 'FPL31LPT1' : Signal Integrity Failure
>>        Integrity Check Pattern Not Found.
>>        Integrity Check Pattern :
>>        550FAAF000FF0000FFFF
>>        IrScan Error.
>>        TDO stuck at 0
>>        Chain Analysis Failed.
>> Error: programmer 'FPL31LPT1' : Data Bit length : 8272
>> Error: programmer 'FPL31LPT1' : Compare Data  :
>> 0000............................00000
>> Error: programmer 'FPL31LPT1' : Scan Chain FAILED.
>> Error: Failed to run programming.
>>
>> Well... Do you know what the problem and solution might be? Please let
>> me know your experiences...
>>
>
>
> I've had similar symptoms with Flashpro (not -Lite) when the APA was
> dead or when there was a physical disconnect on the board between the
> Flashpro header and the JTAG pins on the APA (error in pcb schematic).
>
> Can you try another board?
>
>
> 



Article: 112229
Subject: Re: Xilinx Virtex-4 Clock Multiplexer Inputs
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Sat, 18 Nov 2006 00:19:38 -0800
In 20 years of programmable design, I have yet to have success using a 
programmable part to do much with clocks without making my life heck!  The 
only solution I have found that always works as I expect is to externally 
generate a clock, feed it into the FPGA, use a DCM (or PLL) to lock onto 
the clock (if it runs fast enough), and run everything in the FPGA off that 
clock!

As for 500 MHz - wow.  I am just starting a 250 MHz design and I was worried.

Trevor




>
> Regardless of the issue of utilising 100% of the IO pins, the onboard 
> clock multiplexers do a lot of the heavy lifting for you. I would suggest 
> multiplexing ordinary IO rather than the clocks.
>
> On using 100% of IO - in a practical design where space (and cost) are at 
> a premium, I will use a device that has *what I need* and no more. Now I 
> know that means it's tough to reconfigure and add signals, to say nothing 
> of looking at the internal state by toggling lines appropriately (although 
> that can be done if you're sneaky enough about it), but in a shipping 
> design I need to look at cost - and IO pins (because they enlarge the 
> package) are a very high cost on an FPGA.
>
> I have a design in development right now where I am at the limit of IO 
> pins on an FPGA and I am not going to add a separate device (except 
> perhaps a single SPI IO device - cheap enough, but there are other 
> issues), nor move to a larger FPGA (because I have to move to a larger 
> core to get more IO in this particular family).
>
> Using 2 smaller devices may or may not be appropriate - if you need _lots_ 
> of IO and a small amount of logic it might work, but there's still power 
> to be run, interfaces to be set up etc., and then when adding the cost of 
> configuration devices (and you have to put those down if the resources in 
> the FPGA are required for a processor at boot time) it's easily possible 
> to exceed the cost of a large FPGA with two small ones, to say nothing of 
> footprints (config devices are in god-awfully large packages).
>
> So it's not an easy question to answer, but there _are_ times it's 
> perfectly reasonable to have used 100% of FPGA IO pins.
>
> Cheers
>
> PeteS 



Article: 112230
Subject: Re: PCB Design Houses
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Sat, 18 Nov 2006 00:41:05 -0800
Matthew,

   I have a guy that I use for these types of jobs, but he is in California. 
We have done many multi-processor designs that had to fit in a small space, 
which he has laid out.  He has also done work all the way up to 5 GHz 
(although he's not really an RF guy).  The only complaint I have is that he 
always under-estimates the layout time (he will tell you 4 weeks for the 
board you described; I'll tell you he'll take double that).  In the end, the 
board will be beautiful, he will have hand-routed all of the nets you 
marked as critical, and he will probably catch a few errors you made.  Along 
the way he will probably frustrate you - I long ago started doubling 
all of his estimates (so that I don't lose sleep).  In 10 years he has never 
failed to deliver a quality product.  As for travelling ... if you insist, 
he'll probably go on a "kick-off" or "review" basis, but he won't do on-site 
work.
   Just tell him I told you to call ... his company is CADWORKS 
(949-461-9211).  I couldn't recommend anyone more highly.

Trevor


"Matthew Hicks" <mdhicks2@uiuc.edu> wrote in message 
news:eh5qr4$5v1$1@news.ks.uiuc.edu...
> My research group is building a mezzanine card that contains a lot of 
> critical nets.  We have four 500MHz DSPs and a V2P FPGA all connected 
> together in a network.  Each DSP also has access to SDRAM.  Can anyone 
> suggest a PCB design house that would route the board for us, someone used 
> to dealing with signal integrity of many high-speed digital signals in 
> smaller form factors?  If at all possible, we would prefer someone in 
> Illinois or a consultant that could come to Illinois.
>
>
> ---Matthew Hicks
> 



Article: 112231
Subject: IDELAY Calibration - Virtex 4
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Sat, 18 Nov 2006 01:16:48 -0800
   I am working on a design that takes a source-synchronous LVDS bus (8 data 
bits + clock, SDR) at high speed (250 MHz) as the input off a cable.  I 
will be using the XC4VLX15 part.  I read in the databook about the 
IDELAY/IDELAYCTRL operation that can be used to account for package and 
routing skews.  What I don't quite understand is the calibration process.  I 
understand the timing (64 taps @ 78.125 ps per tap, with non-accumulating 
error) and how an n-tap delay line works.  I just don't understand how to 
calibrate a signal dynamically without an input signal being present.  Since 
my data and clock source is external (and I have no idea when data will 
start), I don't understand how the calibration can take place.  When the 
cable is idle, the clock line will toggle, but the data lines will all be 
static.  (I do see that an idle pattern of FF-00-FF-00... could be handy.) 
When the cable is not connected, though, everything will be 0 - even the clock.
   The other option seems to be the fixed-delay approach - then I just need 
to use the static timing analyzer to account for the flight time and routing 
delays so that they all "line up" right at the balls.  The datasheet claims 
a maximum flight-time skew of 80 ps for any pin in the device (+/- 1 on the 
tap count).  Then I just need to look up the PAD --> IDELAY.D delay, divide 
by 78 ps, and add 1, for each of the 8 data bits.  Am I missing something?
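For what it's worth, the fixed-delay arithmetic can be sketched like this 
(the per-bit PAD --> IDELAY.D delays below are made-up illustrative numbers; 
real values come from the static timing report):

```python
# Sketch of the fixed-delay tap calculation described above.
# The per-bit PAD -> IDELAY.D delays are hypothetical illustration
# values; real numbers come from the static timing report.
TAP_PS = 78.0  # approximate Virtex-4 IDELAY tap with a 200 MHz REFCLK

def taps_to_match(delay_ps, slowest_ps, tap_ps=TAP_PS):
    """Taps needed to pad a bit out to the slowest bit's delay,
    plus one, per the rule of thumb in the post."""
    return round((slowest_ps - delay_ps) / tap_ps) + 1

# hypothetical per-bit PAD -> IDELAY.D routing delays, in ps
delays = [612.0, 655.0, 590.0, 631.0, 702.0, 640.0, 598.0, 668.0]
slowest = max(delays)
taps = [taps_to_match(d, slowest) for d in delays]
print(taps)
```

With skews this small the counts stay far below the 64-tap limit, which is 
what makes the fixed-delay approach look attractive here.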

   I can't tell you how many times in my life I wished I could slide the 
clock or data to account for internal chip routing delays.  Especially on 
pad to pad delay paths that went through a flip-flop.

   This seems too good to be true - so I must be missing something.

Trevor Coolidge





 



Article: 112232
Subject: Re: Influence of temperature and manufacturing to propagation delay
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Sat, 18 Nov 2006 02:00:34 -0800
While I see that you found a solution by recompiling for more speed, I have a 
few "time-honored" workarounds for this problem as well (usually reserved for 
expensive, low-volume systems that need to ship today, to get a paycheck 
next month).

1)  Check the input voltage at the power supply pins.  Sometimes people 
will use an LPF on the power supplies, but the series resistance (probably 
not a precision resistor) tolerance may result in the power being within 
spec but lower than normal.  I like using an adjustable LDO to drive each 
FPGA power group (especially Vccint).  The other nice thing about that 
approach is that you can "tweak" the supply up to the upper limit by 
piggybacking a parallel resistor on the ADJ pin.  So, for instance, instead 
of riding at 2.5 V nominal you can make it 2.6 V.  The higher supply rail 
will help compensate for the temperature slow-down.
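The trim can be sketched numerically.  This assumes an LM317-style adjustable 
regulator (Vout = VREF * (1 + R2/R1), with R1 from OUT to ADJ and R2 from ADJ 
to GND); all resistor values below are illustrative, not from the post:

```python
# Sketch of the supply "tweak" in point 1, assuming an LM317-style
# adjustable regulator: Vout = VREF * (1 + R2/R1), with R1 from OUT
# to ADJ and R2 from ADJ to GND.  Resistor values are illustrative.
VREF = 1.25  # volts, typical adjustable-regulator reference

def vout(r1, r2):
    return VREF * (1 + r2 / r1)

def parallel(ra, rb):
    return ra * rb / (ra + rb)

r1, r2 = 240.0, 240.0
print(vout(r1, r2))            # nominal 2.5 V rail

# Piggyback a trim resistor across R1: R1 drops, so Vout rises.
r1_trim = parallel(r1, 3000.0)
print(vout(r1_trim, r2))       # about 2.6 V, as in the example above
```

The nice part is that the trim is a single piggybacked part, so no board 
re-spin is needed.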

2)  Use the static timing analyzer to look at the slowest paths.  Then see 
if errors on these paths are a likely cause of the symptoms you are seeing. 
Usually they are, and the recompile solution may work.  Unfortunately, in a 
lot of production systems, recompiling results in a whole lot of 
requalification effort.  In that event, the supply trimming noted above is 
usually easier.

3)  Get some air moving.  It doesn't take much airflow at all 
to make the junction temperature significantly lower.  Touching the part with 
your hand may not be a great test: if the device has a very high thermal 
resistance, you won't feel anything, but it'll be cooking inside.  (I saw a 
VAX catch fire this way many years ago ... and it was still booting.)

Trevor




"Thomas Reinemann" <tom.reinemann@gmx.net> wrote in message 
news:1163529222.557776.216940@k70g2000cwa.googlegroups.com...
Hi,

we are running into trouble with our current design for a Xilinx Spartan-3
xc3s1500.

It does signal processing, and it seems that samples get lost with
increasing temperature. Immediately after power-on everything works well;
some minutes later, once the final temperature is reached, some samples are
missed. I didn't have a thermometer handy, but I can touch the FPGA for a
long time, so it may be around 50 C.
It runs with a clock of 76.8 MHz, PAR states a maximum frequency of
78.777 MHz, and logic utilization is about 60%.

One board works as expected and two others show the described effect.
The boards have the same layout but are made by different
manufacturers; at least the non-working ones are lead-free.

Just now we had a discussion about the influence of temperature on
propagation delay. I don't believe it affects clock lines and
other logic resources in (very) different ways. Is that true or not?

I read the thread "Propagation delay sensitivity to temperature,
voltage, and manufacturing", but the answers were mostly about DCMs.

Tom



Article: 112233
Subject: Re: pulse jitter due to clock
From: Kolja Sulimma <news@sulimma.de>
Date: Sat, 18 Nov 2006 12:19:36 +0100
nospam wrote:
> Al <alessandro.basili@cern.ch> wrote:
> 
> 
>>Hi to everyone,
>>I'm developing some electronics to make a time measurement with a 
>>resolution of 25 ps. I'm using a dedicated ASIC to do so 
> 
> 
> Let me guess an Acam part?
Could also be from MSC in Darmstadt, but as he has a CERN email address
I am sure he is using the HPTDC developed at CERN. The HPTDC homepage
has vanished, but we use it in one of our TDC boards:
http://cronologic.de/products/time_measurement/hptdc/

>>Can anyone say something about this? Does it sound reasonable?

Slow input slopes create crosstalk in the HPTDC. Therefore it makes
sense to have extremely fast LVDS input buffers in front of the chip
anyway. If you use buffers with an enable (or an AND gate), you can control
that from the FPGA to mask the signals. There is no need to route the
signals through the FPGA.

You can contact us directly if you have more detailed questions
regarding the HPTDC and FPGAs.

Kolja Sulimma

Article: 112234
Subject: Re: pulse jitter due to clock
From: Kolja Sulimma <news@sulimma.de>
Date: Sat, 18 Nov 2006 12:23:34 +0100
PeteS wrote:

> I have seen a single via add 50ps of deterministic jitter on fast
> signals (edge rate about 10^4V/us) on FR4-13. I have no idea what PCB
> material you are using or intend to use, but keep this in mind.

This applies to serial data streams, where reflections from previous edges
add jitter to the following edges. In time-measurement applications
the edges are extremely rare; before the next edge, any reflections will
long since have settled. Therefore you do not care about any pulse-shape
modification as long as it is deterministic.

Kolja Sulimma

Article: 112235
Subject: Re: Xilinx Virtex-4 Clock Multiplexer Inputs
From: PeteS <peter.smith8380@ntlworld.com>
Date: Sat, 18 Nov 2006 11:50:44 GMT
Trevor Coolidge wrote:
> I have yet to have success in 20 years of programmable design using a 
> programmable part to do much with clocks and not make my life heck!  The 
> only solution I found that always works as I expect is to externally 
> generate a clock, feed that into the FPGA, use a DCM (or PLL) to lock onto 
> the clock if they run fast enough and run everything in the FPGA off that 
> clock!
> 
> As for 500MHz - wow.  I am just starting a 250 MHz design and I was worried.
> 
> Trevor
> 
> 
> 
> 
>> Regardless of the issue of utilising 100% of the IO pins, the onboard 
>> clock multiplexers do a lot of the heavy lifting for you. I would suggest 
>> multiplexing ordinary IO rather than the clocks.
>>
>> On using 100% of IO - in a practical design where space (and cost) are at 
>> a premium, I will use a device that has *what I need* and no more. Now I 
>> know that means it's tough to reconfigure and add signals, to say nothing 
>> of looking at the internal state by toggling lines appropriately (although 
>> that can be done if you're sneaky enough about it), but in a shipping 
>> design I need to look at cost - and IO pins (because they enlarge the 
>> package) are a very high cost on an FPGA.
>>
>> I have a design in development right now where I am at the limit of IO 
>> pins on an FPGA and I am not going to add a separate device (except 
>> perhaps a single SPI IO device - cheap enough, but there are other 
>> issues), nor move to a larger FPGA (because I have to move to a larger 
>> core to get more IO in this particular family).
>>
>> Using 2 smaller devices may or may not be appropriate - if you need _lots_ 
>> of IO and a small amount of logic it might work, but there's still power 
>> to be run, interfaces to be set up etc., and then when adding the cost of 
>> configuration devices (and you have to put those down if the resources in 
>> the FPGA are required for a processor at boot time) it's easily possible 
>> to exceed the cost of a large FPGA with two small ones, to say nothing of 
>> footprints (config devices are in god-awfully large packages).
>>
>> So it's not an easy question to answer, but there _are_ times it's 
>> perfectly reasonable to have used 100% of FPGA IO pins.
>>
>> Cheers
>>
>> PeteS 
> 
> 

My toughest challenges in programmable logic (and I've been doing it a 
long time too) have been crossing clock domains where the dataflow 
is bidirectional and things are quite fast.

FPGAs will add some jitter, and it's difficult to know the exact amount 
(there's a very recent thread about this); it depends on the loading 
of the clocks, amongst other things. I try to separate functional blocks 
and clock the minimum number of FFs with any one clock, but I still get 
the occasional problem.

Cheers

PeteS

Article: 112236
Subject: Input setup time & Output valid delay
From: "jajo" <jmunir@gmail.com>
Date: 18 Nov 2006 04:12:19 -0800
I am new to this field and I have been working with the Xilinx ISE tool.
When I want to create a testbench waveform I have to enter several
values, two of which are: Input setup time & Output valid delay. I do
not understand them:

1. What do they mean? Are they related to external devices?
2. How can I know which values to use?


Thank you!

Jajo


Article: 112237
Subject: Re: memory init in Altera bitfiles, (like data2mem) is it possible?
From: "Antti" <Antti.Lukats@xilant.com>
Date: 18 Nov 2006 05:04:39 -0800
Jim Granville wrote:

> Antti wrote:
> > Thomas Entner wrote:
> >
> >
> >>Hi Antti,
> >>
> >>are you using "Smart Compilation" ?
> >>
> >>Plain Quartus: If you have a design compiled with "Smart Compilation"
> >>enabled, and then change just a memory-content-file and restart compilation,
> >>a magic "MIF/HEX Updater" (or similar) appears after the Fitter-process
> >>(which is skipped by smart compilation) and does what you want. I suppose
> >>this is also doable on the command-line. But don't ask me about the
> >>NIOS-tool-flow, you know, I am using ERIC5 ;-)
> >>
> >>Thomas
> >>
> >
> > Thomas,
> > I don't want smart recompile! I want NO COMPILE.
> >
> > Compile once.
> >
> > Merge new ELF file into the SOF file n - times.
>
>   I think Antti is after a solution that does not need full Quartus,
> but is just an 'insert code' step.
>   As he says, simple enough, and surprising it is not there
> already.
>
>   There must be many teams, where the software is separate from the
> FPGA development, and it is both quicker and safer to avoid
> any rebuild of the FPGA.
>
>   Maintenance/version control, is another area where this
> ability gets important.
>
>   Could you not find the portion of Quartus that Jesse mentioned ?
> Amongst the choices, he said:
> " you can even update your .sof file very quickly with onchip ram
> contents without risk of triggering an entire
> re-compile. I cannot recall the exact syntax of the command but I
> believe the compilation step is the Quartus Assembler (quartus_asm)"
>
> - and hopefully, that command line is both a small EXE, and not
> needing a license install :)
>
> -jg

Jim,

you are right, I am looking for a solution that can update the generated
.SOF file with the contents of the software object without *any* FPGA
toolflow except "SOF patching". Of course it has to work without the
need for a license for a given part - the SOF is generated with licensed
tools; afterwards only the GNU GCC compiler and the SOF patcher are used.

I just generated 29 .BIT files for all known Virtex-4 device-package
combinations - now a MicroFpga user can just run GCC and have a working
FPGA bit file even for a Virtex-4 LX200 or FX140, without needing an FPGA
toolchain license for those devices or a MicroBlaze EDK license!

Just for info: generating the 29 (actually 29+3) BIT files for ALL V-4
device-packages completed in just about 5 hours. Not bad at all; I was
expecting longer compile times.

Antti


Article: 112238
Subject: Re: IDELAY Calibration - Virtex 4
From: "Antti" <Antti.Lukats@xilant.com>
Date: 18 Nov 2006 05:14:42 -0800

Trevor Coolidge wrote:

> I am working on a design that takes a source-synchronous LVDS bus (8 data
> bits + clock, SDR) at a high speed (250MHz) as the input off a cable.  I
> will be using the XC4VLX15 part.  I read in the databook about the
> IDELAY/IDELAYCTRL operation that can be used to account for package and
> routing skews.  What I don't quite understand is the calibration process.  I
> understand the timing (64 taps @ 78.125Mhz with non-accumulating error) and
> how an n-tap delay line works.  I just don't understand how to calibrate a
> signal dynamically without an input signal being present.  Since my data and
> clock source is external (and I have no idea when data will start) I don't
> understand how the calibration can take place.  When the cable is idle, the
> clock line will toggle, but the data lines will all be static.  (I do see
> that an idle pattern of FF-00-FF-00... could be handy).  When the cable is
> not connected though everything will be 0 - even the clock.
>    The other option seems to be the fixed delay approach - then I just need
> to use the static timing analyzer to account for the flight time and routing
> delays so that they all "line-up" right on the balls.  The datasheet claims
> a maximum flight-time skew of 80ps for any pin in the device (+/- 1 on the
> tap count).  Then, I just need to look up the PAD --> IDELAY.D, divide by
> 78ps and add 1 for each of the 8 data bits.  Am I missing something?
>
>    I can't tell you how many times in my life I wished I could slide the
> clock or data to account for internal chip routing delays.  Especially on
> pad to pad delay paths that went through a flip-flop.
>
>    This seems too good to be true - so I must be missing something.
>
> Trevor Coolidge

Hi Trevor,

the IDELAY calibrates against a 200 MHz reference clock;
there is no need to have any of the I/Os toggling or carrying a signal -
all you need is the 200 MHz reference clock. In the early Xilinx docs this
200 MHz was required to come directly from an external source, but the
requirement was later relaxed, so it is allowed to generate the 200 MHz
using a DCM.
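That is also where the per-tap figure in the question comes from: 64 taps 
calibrated across one period of the 200 MHz reference clock give roughly 
78 ps per tap (a quick sketch, assuming the nominal behavior):

```python
# Where the ~78 ps per-tap figure comes from, assuming the nominal
# Virtex-4 scheme: 64 IDELAY taps are calibrated to span one period
# of the 200 MHz IDELAYCTRL reference clock.
REFCLK_HZ = 200e6
TAPS = 64

period_ps = 1e12 / REFCLK_HZ   # one REFCLK period, in picoseconds
tap_ps = period_ps / TAPS      # nominal per-tap delay
print(period_ps, tap_ps)       # 5000.0 78.125
```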

Antti


Article: 112239
Subject: Re: memory init in Altera bitfiles, (like data2mem) is it possible?
From: "jbnote" <jbnote@gmail.com>
Date: 18 Nov 2006 06:03:28 -0800
Hello,

You can do a pseudo "no-recompile" in the Altera flow by doing
back-annotation on your design. This allows you to store the full
PAR results of your design, quite similar to the XDL format. It is
available on the command line as:

quartus_cdb design_name --back_annotate=lab (gets the mapping/placement
back into the ".qsf", IIRC)
quartus_cdb design_name --back_annotate=routing (gets the routing back in a
".rcf" file)

Then recompile, specifying that these constraint files be used. This is
done in the .qsf file with:

set_global_assignment -name ROUTING_BACK_ANNOTATION_MODE NORMAL
set_global_assignment -name ROUTING_BACK_ANNOTATION_FILE design_name.rcf

I have only tried this on very small designs, but I think it may be
effective on bigger ones too. I'm interested in your results if you go
this way! (And I'll investigate precise RAM init in the .sof soon.)

JB


Article: 112240
Subject: Re: memory init in Altera bitfiles, (like data2mem) is it possible?
From: "Antti" <Antti.Lukats@xilant.com>
Date: 18 Nov 2006 06:11:36 -0800
jbnote wrote:

> Hello,
>
> You can do a pseudo "No-recompile" in the altera flow by doing
> back-annotation on your design. This will allow you to store the full
> PAR results of your design, quite similar to the XDL format. This is
> available on the command-line as
>
> quartus_cdb design_name --back_annotate=lab (get back mapping / placing
> in the ".qsf", IIRC)
> quartus_cdb design_name --back_annotate=routing (get back routing in a
> ".rcf" file)
>
> Then recompile by specifying to use these constraints files. This is
> done in the qsf file by specifying:
> set_global_assignment -name ROUTING_BACK_ANNOTATION_MODE NORMAL
> set_global_assignment -name ROUTING_BACK_ANNOTATION_FILE
> design_name.rcf
>
> However, I have only tried this on very small designs, but I think this
> may be effective on bigger ones too. I'm interested in your results if
> you go this way ! (and i'll investigate precise RAM init in the sof
> soon).
>
> JB

hi JB,

this is something that is really interesting for some purposes, but
currently I am really looking for a solution with NO compile at all -
only SOF and ELF merging...

here is a dump from the SOF parser:

Version: Quartus II Compiler Version 6.0 Build 202 06/20/2006 Service
Pack 1 SJ Full Version
Device: EP1C6Q240C6
OCP: (V6AF7P00A2;V6AF7PBCEC;V6AF7PBCE1;)
Option 18: FF00FFFF
Option :19 (?) 16
Option :17 (logic) 139900
Option :21 (ram) 433 
Option 29: 20006E00,CF19B6F1
CRC16: FE54

Antti


Article: 112241
Subject: Static Timing Analysis vs Dynamic Timing Analysis
From: "jajo" <jmunir@gmail.com>
Date: 18 Nov 2006 06:45:53 -0800
Hi!

Could anybody explain these concepts and their differences to me? And
which of them does the Xilinx ISE Foundation tool perform?

Thanks

Jajo


Article: 112242
Subject: Re: pulse jitter due to clock
From: Austin Lesea <austin@xilinx.com>
Date: Sat, 18 Nov 2006 08:05:48 -0800
Symon,

Well, yes, they are differential across the chip.

And what they accomplish is less jitter than if they had been single-ended.

It is quite a battle:  voltage goes down, distances get longer (for 
smaller wires), more stuff is switching, etc.  The gains made may not appear 
substantial, yet without them the result would have been far 
worse (no small gain, but a huge loss of performance).

Austin


Symon wrote:

> "Austin Lesea" <austin@xilinx.com> wrote in message 
> news:ejkus8$kkk1@cnn.xsj.xilinx.com...
> 
>>>So, that's a cool thing. Did you guys do any measurements on the jitter
>>>performance of this? I.e. how much jitter is added to a differential data
>>>signal coming out of an IOB clocked by a BUFIO driven from a differential
>>>clock coming to the FPGA 'Clock Capable' pins.?
>>
>>Yes, we have performed a great deal of characterization.  And the clock
>>capable pins, or even a plain IOB has no real difference in jitter
>>performance.
>>
> 
> Hi Austin,
> Thanks for getting back! Your reply surprised me; I now wonder just what 
> does the diff clock routing bring to the party if not better jitter 
> performance? BTW, are the regular global clock networks differential?
> Thanks, Syms. 
> 
> 

Article: 112243
Subject: Re: IDELAY Calibration - Virtex 4
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Sat, 18 Nov 2006 08:36:22 -0800
Links: << >>  << T >>  << A >>
Thank you for the clarification.

"Antti" <Antti.Lukats@xilant.com> wrote in message 
news:1163855682.873228.289260@e3g2000cwe.googlegroups.com...
>
> Trevor Coolidge schrieb:
>
>> I am working on a design that takes a source-synchronous LVDS bus (8 data
>> bits + clock, SDR) at a high speed (250MHz) as the input off a cable.  I
>> will be using the XC4VLX15 part.  I read in the databook about the
>> IDELAY/IDELAYCTRL operation that can be used to account for package and
>> routing skews.  What I don't quite understand is the calibration process. 
>> I
>> understand the timing (64 taps @ 78.125 ps with non-accumulating error) 
>> and
>> how an n-tap delay line works.  I just don't understand how to calibrate 
>> a
>> signal dynamically without an input signal being present.  Since my data 
>> and
>> clock source is external (and I have no idea when data will start) I 
>> don't
>> understand how the calibration can take place.  When the cable is idle, 
>> the
>> clock line will toggle, but the data lines will all be static.  (I do see
>> that an idle pattern of FF-00-FF-00... could be handy).  When the cable 
>> is
>> not connected though everything will be 0 - even the clock.
>>    The other option seems to be the fixed delay approach - then I just 
>> need
>> to use the static timing analyzer to account for the flight time and 
>> routing
>> delays so that they all "line-up" right on the balls.  The datasheet 
>> claims
>> a maximum flight-time skew of 80ps for any pin in the device (+/- 1 on 
>> the
>> tap count).  Then, I just need to look up the PAD --> IDELAY.D, divide by
>> 78ps and add 1 for each of the 8 data bits.  Am I missing something?
>>
>>    I can't tell you how many times in my life I wished I could slide the
>> clock or data to account for internal chip routing delays.  Especially on
>> pad to pad delay paths that went through a flip-flop.
>>
>>    This seems too good to be true - so I must be missing something.
>>
>> Trevor Coolidge
>
> Hi Trevor,
>
> the IDELAY calibrates against a 200MHz clock.
> There is no need to have any of the IOs toggling or carrying any signal;
> all you need is the 200MHz reference clock. In early Xilinx docs this
> 200MHz was required to come directly from an external source, but later
> it was relaxed so it is allowed to generate the 200MHz using a DCM.
>
> Antti
> 
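The fixed-delay tap arithmetic quoted above (divide the PAD-to-IDELAY.D delay by the ~78 ps tap resolution, add margin) can be sketched as below. The per-bit routing delays are made-up placeholders, not values from any timing report:

```python
# Virtex-4 IDELAY tap resolution with a 200 MHz reference:
# 64 taps span one 5 ns reference period -> 78.125 ps per tap.
TAP_PS = 5000.0 / 64  # 78.125 ps

def taps_for_delay(delay_ps, margin_taps=1):
    """Round a pad-to-IDELAY.D delay to a tap count, plus margin.

    Hypothetical helper; clamped to the 0..63 range of the
    64-tap delay line.
    """
    taps = round(delay_ps / TAP_PS) + margin_taps
    return max(0, min(63, taps))

# Hypothetical PAD -> IDELAY.D delays (ps) for the 8 data bits,
# as one might read them from the static timing analyzer.
example_delays = [612, 590, 655, 601, 633, 588, 620, 609]
tap_settings = [taps_for_delay(d) for d in example_delays]
```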



Article: 112244
Subject: Re: Input setup time & Output valid delay
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Sat, 18 Nov 2006 08:43:53 -0800
Links: << >>  << T >>  << A >>
Jajo,

They are related to external connections.  The input setup time asks "How 
long before the active clock edge is the data guaranteed to be valid?".  The 
output valid delay asks "How long after the clock edge does it take for the 
output to be valid?".  The testbench waveform can use a complete system 
timing model to generate a valid simulation for post-routed designs.

Assuming you are just getting started, set them to 0 (ideal) for now.  That 
will let you get your logic correct.  Then, once your device targeting is 
done, you can deal with the timing issues.  (Since you don't know what they 
are, you are probably not really targeting a device yet.)
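As a concrete illustration of the two numbers (the clock period and values below are arbitrary examples, not from any device): with a 20 ns clock and a 4 ns input setup time, the testbench must drive new inputs no later than 4 ns before each active edge, and with a 6 ns output valid delay it may only check outputs from 6 ns after the edge.

```python
# Sketch: when a testbench may drive inputs and sample outputs,
# given an input setup time and an output valid delay.
CLOCK_PERIOD_NS = 20.0   # arbitrary example values
INPUT_SETUP_NS = 4.0
OUTPUT_VALID_NS = 6.0

def stimulus_deadline(edge_ns):
    """Latest time new input data may change before the edge at edge_ns."""
    return edge_ns - INPUT_SETUP_NS

def earliest_check(edge_ns):
    """Earliest time outputs are guaranteed valid after the edge at edge_ns."""
    return edge_ns + OUTPUT_VALID_NS
```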

Trevor



"jajo" <jmunir@gmail.com> wrote in message 
news:1163851939.665330.300000@h48g2000cwc.googlegroups.com...
>I am new in this field and I have been working with ISE Xilinx tool.
> When I want to create a testbench waveform I have to insert several
> values, two of them are: Input setup time & Output valid delay. I do
> not understand them:
>
> 1. What do they mean?. Are they related to external devices?.
> 2. How can I know which values they have?.
>
>
> Thank you!.
>
> Jajo
> 



Article: 112245
Subject: Re: pulse jitter due to clock
From: "Trevor Coolidge" <tjc-sda@cox.com>
Date: Sat, 18 Nov 2006 08:52:03 -0800
Links: << >>  << T >>  << A >>
Al,

   My experience is that FPGAs definitely add jitter.  The amount added 
depends on the device loading.  I spent some time using test equipment to 
measure induced jitter.  My observed numbers were nowhere near the 15 ps you 
have quoted.  The best I could ever discern was around 100 ps.  This was done 
about 1.5 years ago, so the technology has changed.

   I read an approach on this board where someone suggested using a Virtex-4 
with multiple inputs compared simultaneously and a calibration procedure 
to lower the signal uncertainty.

Trevor


"Al" <alessandro.basili@cern.ch> wrote in message 
news:ejkk93$1cb$1@cernne03.cern.ch...
> Hi to everyone,
> I'm developing some electronics to make a time measurement with a 
> resolution of 25 ps. I'm using a dedicated ASIC to do so but I'm giving 
> the signals to the ASIC through an FPGA.
> The way is very simple, basically I have some signals coming to my fpga 
> which I will mask with some combinatorial logic and a configurable 
> register so that I can allow some measurements or some others. The output 
> of this "masking" will go to the ASIC.
> They assert (and here is the question) that a clocked device as an FPGA 
> may add some jitter to the signals due to the substrate  current overload 
> (for the presence of the clock) that will lead to some 15 ps jitter over 
> the signals. I don't know how they could resolve this value but I'm 
> assuming they were telling the truth about numbers (at least, while I have 
> some doubts about explanation of those numbers).
> Can anyone say something about this? Does it sound reasonable?
>
> Al
>
> -- 
> Alessandro Basili
> CERN, PH/UGC
> Hardware Designer 



Article: 112246
Subject: IDELAY setup/hold
From: "Tom" <tom.derham@gmail.com>
Date: 18 Nov 2006 09:07:30 -0800
Links: << >>  << T >>  << A >>
According to the Virtex 4 data sheet, the setup/hold window widens
considerably when using non-zero IDELAY delay (either in a normal input
block or as part of ISERDES)...

For example, for a -10 part, DDR mode, setup 8.84, hold -6.51 (p28 of
data sheet) gives a minimum data valid window (assuming perfect clk
alignment, no jitter, etc) of 2.33ns, meaning the clk frequency for DDR
data must be much less than 215MHz... seemingly somewhat less than
might be implied from Table 15.
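Re-running that arithmetic from the quoted setup/hold pair (the negative hold credits time back to the window):

```python
# Data valid window required at the D pin, from the quoted -10 numbers.
setup_ns = 8.84
hold_ns = -6.51                     # negative hold shrinks the window
window_ns = setup_ns + hold_ns      # about 2.33 ns

# For DDR data, each bit occupies half a clock period, so:
max_ddr_clk_mhz = 1000.0 / (2 * window_ns)   # roughly 215 MHz
```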

These setup and hold times are defined for the "D pin" with respect to "CLK".
Assuming that "D pin" refers to the D pin at the top left of Fig 8.1, p352 of
the Virtex 4 user guide, this seems meaningless if the IDELAY is used to
delay the data (e.g. to compensate for skew of the bus on the PCB).
Should these values be shifted according to the delay applied by the
IDELAY, or do these values in fact apply to D and CLK of the register
flip-flops themselves AFTER the IDELAY - e.g. as shown for IFF1 etc.
towards the right of Fig 7.1, p309?

I would be grateful for any clarification as I can't work it out from
the datasheet!

Many thanks

Tom


Article: 112247
Subject: Re: PCMCIA interface
From: "vasile" <piclist9@gmail.com>
Date: 18 Nov 2006 09:27:03 -0800
Links: << >>  << T >>  << A >>

John Adair wrote:
> Even if you implement Cardbus (essentially PCI) rather than
> PCMCIA(essentially ISA) you won't get continuous 33MHz transfer other
> than short periods of time. ExpressCard format can go this fast
> providing the architecture behind it can support that data rate.

Hi John,
I hadn't heard about this standard before.  Is it compatible with most
PCMCIA interfaces available on laptops?  Is there documentation available
somewhere?

thx,
Vasile

>
> John Adair
> Enterpoint Ltd. - Home of Tarfesock1. The Cardbus FPGA Development
> Board.
>
> vasile wrote:
> > Hi everybody,
> >
> > A part of a project I'm designing is a PCMCIA bus card with a  32 bit
> > data system bus.
> > The system included in the card has a multicore DSP, an ARM processor,
> > some NOR FLASH, SDRAM memory and a FPGA.
> > The FPGA is used for the PCMCIA interface to the system bus and some
> > high speed math as a companion for DSP. The purpose of the whole PCMCIA
> > interface is to transfer some data from the SDRAM into PC, in real time
> > at 33Mhz clock rate. The card data system bus is running at 133MHz.
> >
> > How you'll chose the design for the best card bus interface, knowing
> > there are some fast processes on the internal bus:
> >
> > a. using the FPGA as a slave memory selected by the DSP and
> > implementing a FIFO inside the FPGA . An interrupt request will notice
> > the PC to start download data and empty the FIFO.
> > b. using DMA control over the system bus from the FPGA (FPGA as master,
> > DSP as slave)
> > c. other (please detail)
> > 
> > thank you,
> > Vasile


Article: 112248
Subject: Re: PCMCIA interface
From: zwsdotcom@gmail.com
Date: 18 Nov 2006 09:59:28 -0800
Links: << >>  << T >>  << A >>

vasile wrote:

> I didn't heard about this standard before. It's compatible with most
> PCMCIA interfaces

It's unrelated to PCMCIA and is completely incompatible, having a
different connector. A few recent laptops support it, and it will
probably (eventually) replace PCMCIA. In the meantime we have the joy
of "legacy-free"!


Article: 112249
Subject: Re: memory init in Altera bitfiles, (like data2mem) is it possible?
From: Dennis Ruffer <druffer@speakeasy.net>
Date: Sat, 18 Nov 2006 11:34:50 -0700
Links: << >>  << T >>  << A >>
On 2006-11-18 07:11:36 -0700, "Antti" <Antti.Lukats@xilant.com> said:

> here is dump of the SOF parser:
> 
> Version: Quartus II Compiler Version 6.0 Build 202 06/20/2006 Service
> Pack 1 SJ Full Version
> Device: EP1C6Q240C6
> OCP: (V6AF7P00A2;V6AF7PBCEC;V6AF7PBCE1;)
> Option 18: FF00FFFF
> Option :19 (?) 16
> Option :17 (logic) 139900
> Option :21 (ram) 433
> Option 29: 20006E00,CF19B6F1
> CRC16: FE54
> 
> Antti

I suspect that there is another layer of parsing involved.  Here's what 
I'm getting from a relatively complex model that we are using:

Quartus II Compiler Version 5.0 Build 171 11/03/2005
 Service Pack 2 SJ Full Version
Device: EP2S60F1020C3
OCP: (V6AF7PBCEC;V6AF7PBCE1;)
Option 18: FF00FFFF
OPT:19 16
 1073124: 00 00 00 00  20 00 20 00 -
		  00 00 01 00  FF FF FF FF  .... . .........

OPT:17 1923240
 1073124: 00 00 00 00  00 00 DC C4 -
		  EA 00 01 00  00 00 00 00  ................

OPT:21 6928
 1073124: 00 00 00 00  00 00 1C D8 -
		  00 00 01 00  00 00 00 81  ................

Option 23: 2
CRC16: 83F8  ok

However, thanks for the start.

Now, to dig into the RAM

DaR
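For anyone scripting their own .sof walker, a bit-serial CRC-16 routine is the usual last step; a CRC-16/CCITT sketch is shown below, but note that the polynomial and init value Quartus actually uses for that trailing CRC16 field are an assumption here, so verify against a file whose CRC your parser already reports as ok:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bit-serial CRC-16/CCITT (poly 0x1021, init 0xFFFF, no reflection).

    Whether this variant matches the CRC16 field in a .sof is an
    assumption -- check it against a known-good file first.
    """
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Standard check value for this CRC variant:
# crc16_ccitt(b"123456789") == 0x29B1
```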



