Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
I think Max+Plus II support Flex 8000 "luigi funes" <fuzzy8888@hotmail.com> wrote in message news:5SH56.256628$hk4.12972537@news.infostrada.it... > Hi, > Does anyone know what devices are supported by the > free Altera development software? > It is not clear on the Altera web pages, > and I would know it before download a lot of > MBytes. I'm specially interested in FLEX 8000 > family. Thanks! > > Luigi > > >Article: 28326
Hi Luigi, well the Baseline Software supports the 6K, 1K and some small 10K devices. The 8K family, being an "old" family, is not supported. But I would not use the 8K in new designs at all, as the 6K and 1K offer good alternatives. Just my two cents... CU, CarlhermannArticle: 28327
Visit http://www.chip-guru.com/ Please add your comments/questions in the feedback section. Rajesh Bawankule (Verilog FAQ : http://www.parmita.com/verilogfaq/ Verilog Center : http://www.parmita.com/verilogcenter/ ) ------------------------------------ Chip-Guru is a quarterly magazine devoted to hardware design. It will feature a collection of articles/papers written by hardware/system engineers for hardware/system engineers. It will cover technical issues and solutions from the following topics: -ASIC / FPGA design methodologies -EDA Tools -Board Design -HDL (Verilog & VHDL) design / verification -Careers in Hardware Design / verification -Embedded Software Contents: In this magazine the emphasis will be on technical issues and hands-on problem solving. Articles will cover real-life examples and their solutions. In this magazine there is no place for vendor-written articles praising their own tools. Style: This is not a commercial magazine. There are no constraints on your creativity. You can write your article as short or as lengthy as you want. Make sure that they are full of illustrations to make them interesting and easy to understand. If you are interested in contributing an article please email with an excerpt of your paper. Rajesh Bawankule : rajesh52@hotmail.com Sent via Deja.com http://www.deja.com/Article: 28328
Hi yaohan, "yaohan" <yaohan@tm.net.my> schrieb im Newsbeitrag news:3a578c1e.0@news.tm.net.my... > I think Max+Plus II support Flex 8000 That's right, but the Baseline version of Max+PlusII, which could be downloaded from ALTERA's website is limited to the 6K, 1K and some 10K devices. Support for the greater 10K and the 8K is not included. As luigi asked for the free software, I think he ment the "Baseline". CU, CSArticle: 28329
i am designing a finite state machine (moore) with 36 outputs and 10 inputs plus 4 counters etc. to generate some of the input signals. what would be the best fpga architecture to implement it? presently i am in the last stage of RTL code. i am a newbie and this is my first design. i am using synopsys VSS, DC and fpga compiler. do i need any other tool to complete the design? thankz in advance kuldeep Sent via Deja.com http://www.deja.com/Article: 28330
Since you don't specify a frequency, clock to pad or setup/hold timing, any FPGA out there will do what you want to do. Be sure to keep the design as synchronous as possible for fastest timing. You will also need the specific FPGA vendor tools for place and route, timing analysis, floorplanning, etc. Simon Ramirez, Consultant Synchronous Design, Inc. Oviedo, FL USA <kkdeep@my-deja.com> wrote in message news:939jv3$bjc$1@nnrp1.deja.com... > i am designing a finite state machine (moore) with 36 outputs and 10 > inputs plus 4 counters etc. to generate some of the input signals. > what would be the best fpga architecture to implment it. presently i am > in the last state of RTL code. > i am a newbie an this is my first design > i am using synopsys VSS , DC and fpga compiler. do i need any other > tool > to complete the design. > thankz in advance > kuldeepArticle: 28331
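Simon's advice to keep the machine synchronous has a simple structural reading: a Moore machine is a registered state plus purely combinational next-state and output decode. A minimal sketch in Python (the states, input, and output encodings here are invented for illustration, not kuldeep's actual design):

```python
# Moore FSM skeleton: outputs are a pure function of the registered
# state, never of the inputs directly -- which is what keeps the
# output decode clean after each clock edge.
# All state names and encodings below are hypothetical placeholders.

TRANSITIONS = {            # (state, input_bit) -> next state
    ("IDLE", 0): "IDLE", ("IDLE", 1): "RUN",
    ("RUN",  0): "DONE", ("RUN",  1): "RUN",
    ("DONE", 0): "IDLE", ("DONE", 1): "IDLE",
}
OUTPUTS = {"IDLE": 0b00, "RUN": 0b01, "DONE": 0b10}  # state -> output bits

def step(state, inp):
    """One clock edge: register the next state, decode outputs from it."""
    nxt = TRANSITIONS[(state, inp)]
    return nxt, OUTPUTS[nxt]

state, trace = "IDLE", []
for bit in [1, 1, 0, 0]:
    state, out = step(state, bit)
    trace.append((state, out))
```

In RTL this maps onto one clocked process for the state register and one combinational process each for the TRANSITIONS and OUTPUTS tables, which is the structure any FPGA synthesis tool will infer well.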
The Xilinx on Linux HowTo has been updated. http://www.polybus.com/xilinx_on_linux.htmlArticle: 28332
In article <ysm4rziazm8.fsf@geriatrix.circlesquared.cXoXm>, Jon Schneider <jms@geriatrix.circlesquared.cXoXm> wrote: > I've been tinkering with Xilinx's Webpack software and am rather > impressed with it. > > If I now make up my own design of board up with a CPLD on it, where is > it defined exactly how to go from the Jedec file output by the tools > to doing things to the Jtag signals ? > > Ideally I expect to have to solder up a lead from say a standard > parallel port to my board then ideally take a program and fill in the > bits that say "insert code for raising tms here" and the like. Is > there such a thing I can download ? > > Isn't this a FAQ ? > > Jon > The simplest way is to buy a programming cable from xilinx (about $50) and use the Webpack Device Programmer SW. The programming cable outputs the Jtag signals on a connector which can be plugged into your PCB. The SW reads the Jedec file and does the manipulation of the lines for you. Hope this helps. -- Klaus Falser Durst Phototechnik AG I-39042 Brixen Sent via Deja.com http://www.deja.com/Article: 28333
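For anyone who still wants to bit-bang the JTAG port rather than buy the cable, the heart of the "insert code for raising tms here" logic Jon asks about is walking the 16-state TAP controller with the TMS line, one bit per TCK edge. A sketch of that sequencing in Python: only part of the standard TAP graph is shown, and the pin-level `set_tms`/`pulse_tck` calls are hypothetical placeholders for your parallel-port driver.

```python
# Partial IEEE 1149.1 TAP controller graph.  Each state maps to a pair:
# the successor when TMS is sampled low, and when TMS is sampled high.

TAP = {  # state -> (next if TMS=0, next if TMS=1)
    "RESET":      ("IDLE",       "RESET"),
    "IDLE":       ("IDLE",       "SELECT-DR"),
    "SELECT-DR":  ("CAPTURE-DR", "SELECT-IR"),
    "CAPTURE-DR": ("SHIFT-DR",   "EXIT1-DR"),
    "SHIFT-DR":   ("SHIFT-DR",   "EXIT1-DR"),
    "EXIT1-DR":   ("PAUSE-DR",   "UPDATE-DR"),
    "SELECT-IR":  ("CAPTURE-IR", "RESET"),
}

def clock_tms(state, bits):
    """Apply one TMS bit per rising TCK edge; return the final TAP state."""
    for tms in bits:
        state = TAP[state][tms]
        # real code would do: set_tms(tms); pulse_tck()
        # (hypothetical pin-I/O helpers for the parallel-port lead)
    return state
```

From Test-Logic-Reset, the TMS sequence 0,1,0,0 lands in Shift-DR, where TDI/TDO shifting of the data register can begin; the Jedec file supplies what gets shifted.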
felix_bertram@my-deja.com wrote: > > Hello everybody, > > first of all I'd like to thank you for your valuable input, > special thanks to Austin and Hal. > > I succeeded creating a 48MHz clock from the 24Mhz, using > an XOR and a FF. I can see the clock with my scope, > it is running at 48Mhz and is approx 4ns high (and 16ns low), > with my XC2S50-TQ144-5C. The signal I probed was CLK2x from > the code below. > > I did not succeed creating a 96MHz clock from the 48MHz > using a DLL. The signal probed at CLK4x is still 48MHz, > however with a duty cycle of 50%. > > I probed the DLL's LOCKED pin- it seems to be high. > > Hal, you wrote: > >>> > Also note that the XOR doubler trick doesn't work if the > design needs the DLL to phase lock to the input clock > as well as generate faster clocks > <<< > > I do not care about the phase of my 96MHz clock, so as > far as I understood things, everything should be fine. > Is there something obvious that I missed? Felix, There's an app note about using 2 DLLs to implement a clock quadrupler (?). I think you have to let the first DLL lock before letting the second try to lock, so the second one should be held off for a while. This might be the root of your problem. Nial.Article: 28334
In article <3A560C77.5358EB91@us.ibm.com>, Greg Pfister <pfister@us.ibm.com> wrote: >Now, will somebody please explain to me why NFSMs are worth >talking about? I learned about and taught this gorp back in the >late 60s. (I certainly wouldn't waste time doing so now.) It was >sort of cute theory, not much content that I could see, with >little to recommend it in practice. Can't say that I see much >difference now, and there are lots more interesting things to >talk about. If you're a hardware person, DFSMs are all over the place, eg multiprocessor cache protocols, hardware cache/TLB refills, or for that matter hardwired within-an-instruction sequencing logic (the sort that might otherwise be done by microcode). I don't know for sure, but perhaps NFSMs might come in handy for investigating or reasoning about the set of all reachable states of a DFSM for design-validation purposes? If you're a software person, think of an NFSM as one way of implementing regular expressions (regexps)... which are the essence of egrep(1) and kindred programs, and are very important components of Perl, the more powerful command-line shells, all Real Programmer's Editors, and assorted other software which escapes my mind at the moment. There's a considerable (albeit rather scattered) literature on implementing regexps, with several good freeware packages, occasional usenet flamefests on which is "better" (faster, less buggy, ...). POSIX 1003.2 defines a (actually 2 different flavors of) "standard" regexp syntax/semantics, but I don't know if typical regexp packages implement it or a superset. If you're a software person of the compiler-writing flavor, NFSMs are an essential part of the formal languages background you need to solidly grok how a lexical-analyser generator (LEX, FLEX, ...) or parser generator (YACC, Bison, ANTLR, ...) works. 
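Jonathan's regexp point can be made concrete in a few lines: an NFSM is run by tracking the set of currently active states, the on-the-fly subset construction behind egrep-style matchers. A toy example, hand-built for the pattern a(b|c)*d (state numbers are arbitrary):

```python
# Simulating a nondeterministic FSM by carrying a *set* of active
# states through the input, rather than determinizing up front.

NFA = {  # (state, symbol) -> set of successor states; None = epsilon move
    (0, "a"):  {1},
    (1, None): {2, 4},   # fork: enter the (b|c)* loop, or skip to the 'd'
    (2, "b"):  {3},
    (2, "c"):  {3},
    (3, None): {2, 4},   # loop back, or fall through to the 'd'
    (4, "d"):  {5},
}
ACCEPT = {5}

def eps_closure(states):
    """Grow the active set along epsilon edges until it stops changing."""
    stack, seen = list(states), set(states)
    while stack:
        for nxt in NFA.get((stack.pop(), None), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def matches(text):
    """True if the whole of text is accepted by the machine."""
    active = eps_closure({0})
    for ch in text:
        moved = set()
        for s in active:
            moved |= NFA.get((s, ch), set())
        active = eps_closure(moved)
    return bool(active & ACCEPT)
```

A LEX/FLEX-style tool does essentially this, then (usually) converts the reachable state-sets into a DFSM table for speed, which ties the software and hardware views of FSMs back together.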
-- -- Jonathan Thornburg <jthorn@thp.univie.ac.at> http://www.thp.univie.ac.at/~jthorn/home.html Universitaet Wien (Vienna, Austria) / Institut fuer Theoretische Physik Q: Only 7 countries have the death penalty for children. Which are they? A: Congo, Iran, Nigeria, Pakistan[*], Saudi Arabia, United States, Yemen [*] Pakistan moved to end this in July 2000. -- Amnesty International, http://www.web.amnesty.org/ai.nsf/index/AMR511392000Article: 28335
Hello I have a strange message about an APEX20K_ timing violation during simulation, even though I use the APEX 20KE family and all the frequencies are met according to Quartus. Any suggestions? Thanks -- Michel Le Mer immeuble Cerium STA 12, square du Chene Germain 02 23 20 04 72 35510 Cesson-SevigneArticle: 28336
Hi! Is it possible to import Viewlogic (EDesigner) netlists into Eagle and/or vice versa? It seems to me that the Eagle export function produces no valid edn-netlists. Thanx, Ingo.Article: 28337
Hello everybody, thanks again for all your helpful input. I finally succeeded in creating a 4x clock, so I am a lucky guy now. > DLL starts adjusting clock delay before configuration > process of the FPGA is completed - your first clock > doubler not running still. May be DLL will act correctly, > if RST is connected to input pin and DLL resetted after > configuration is completed? Indeed, the problem seems to be related to the DLL starting to work *before* the FPGA being completely configured (so before my first clock doubler is implemented). This seems a bit odd to me, and I ask myself especially the question: What clock is the DLL running from (and locking to) before the FPGA is configured? The clock that comes into the most closely located GCLK pin? Hmm, ... However, as I said before, I am a lucky guy now, best regards Felix Bertram Sent via Deja.com http://www.deja.com/Article: 28338
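For readers joining the thread late, the XOR doubler Felix started from is easy to see in discrete time: XORing a clock with a delayed copy of itself raises one short pulse per edge, so the result has twice as many rising edges per interval, with exactly the narrow high time he measured. A small simulation, with the delay idealized as a one-sample delay line:

```python
# clk XOR delayed(clk) is 1 precisely on the samples where clk just
# changed -- one pulse per rising AND falling edge of the input, i.e.
# a pulse train at twice the input frequency.

def xor_doubler(clk):
    delayed = [clk[0]] + clk[:-1]            # one-sample delay of clk
    return [a ^ b for a, b in zip(clk, delayed)]

def rising_edges(sig):
    """Count 0 -> 1 transitions in a sampled signal."""
    return sum(1 for a, b in zip(sig, sig[1:]) if (a, b) == (0, 1))

clk = ([0] * 4 + [1] * 4) * 4                # 4 periods of the input clock
dbl = xor_doubler(clk)                       # pulse train at 2x the rate
```

In the real circuit the pulse width equals the delay element's delay, which is why Felix's 48 MHz output was only about 4 ns high against 16 ns low; a following DLL (as in this thread) is what restores the 50% duty cycle.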
"Bo Petersen" <bp@contex.dk> wrote in <zhG46.2678$%c2.67933@news010.worldonline.dk>: >Hello, I am a newbie in FPGAs and am interested especially in >how to implement a 2-D DCT using FPGA ! > >Does anyone know where I can find a spec./book which show and explain the >implementation ? > >Thanks >Bo Petersen I have to prepare a presentation for a seminar and found: Reiner W. Hartenstein Manfred Glesner Field Programmable logic Smart applications,New Paradigms and Compilers 6th International Workshop on FPGA Logic&Applications FPL'96 Darmstadt Germany 23-25.09.1996 Proceedings ISBN 3-540-61730-2 Springer contains an Chapter on how to implement a 2D-DCT with FPGA, I believe they used SRAM-FPGAs... Hope That helps Michael UngerArticle: 28339
Hi Please humour me and my lack of knowledge, I would appreciate some advice: I am looking at possibilities for a prototype radar receiver (short range, but good beamforming agility), and was looking at a DSP chip such as the TI C5410 DSP chip or similar to perform digital downconversion after the ADC (clocked at say 50MHz). I am looking into the possibilities of using an FPGA to do something similar. How do the costs compare for an FPGA comparable in processing power (the TI chip is only about $50) and are similar development tools available? Thanks for any advice Tom Sent via Deja.com http://www.deja.com/Article: 28340
Anthony, Is WebPACK reporting an error on using the keyword NET? Check out the app note on using a UCF for designs in WebPACK located at: http://www.xilinx.com/xapp/xapp352.pdf Also, does the syntax for the UCF include #PINLOCK_BEGIN (at the beginning of the UCF) and #PINLOCK_END (at the end of the UCF)? Thanks, Jennifer Anthony Ellis - LogicWorks wrote: > Hi, > > I have a CPLD design with a .ucf file that works with last year's version of > the web-pack tools. > > Now using the 3.2 ISE version of the tools the .ucf is suddenly using > unknown keywords - such as NET. > > Even compiling the design without the .ucf and then letting the lock pin > function generate a .ucf does not work. Defining this self generated UCF > file as the projects UCF file causes the mapper and contraints editor to > throw out the same syntax errors. > > Any solutions please? > > Anthony.Article: 28341
Hi! When I start a new design with partially coded (not yet fully implemented) black boxes and location constraints for a fixed pin assignment, I often have the problem that the unused pins are optimized away by the synthesis tool. This results in an error from the place & route tools since the location constraint can't be met. Is it really necessary that the P&R tool reports an error if a pin in the constraint file doesn't exist? I think a warning would be enough, so that I don't have to set extra attributes in the VHDL/Verilog code!? What's the easiest way to prevent that (yet) unused pins are optimized away? Thanks, MichaelArticle: 28342
"I. Purnhagen" wrote: > > Hi! > > Is it possible to import Viewlogic (EDesigner) netlists into Eagle and/or > vice versa? > It seemed to me, that the Eagle export function produces no valid > edn-netlists. > > Thanx, Ingo. I use Eagle but not Viewlogic, so I can't answer that myself. But you are much more likely to get an answer to your question here: news://news.cadsoft.de/eagle.support.eng -- My real email is akamail.com@dclark (or something like that).Article: 28343
Some time ago I posted a message regarding Xilinx Alliance for Linux. I was wondering if Xilinx have any plans on releasing Alliance for Linux/X86? I've been using Alliance under Solaris, but I would like to run under Linux since the price/performance ratio is much better. Some people suggested using NT. I do this at the moment (well actually Windows 2000), it works, but it has some drawbacks. Here are some of them. 1) Lack of a nohup and ps command. I'm used to log in from home at night to start long PAR jobs which take 5-20 hours to complete. The process seems to continue, but I wish I had something like nohup to start the process in the background and log all output to a file. I also wish I had a ps command to see if the darn thing is still running. I can not kill the job if I want to. bash for W2K might solve some of these issues. 2) Uptime. Every time you install a little dinky piece of software (or click in the control panel) you'll have to restart the machine. So much for uptime and running long simulations and PAR jobs. 3) X11. I can't start emacs or signalscan over my ISDN line from home like I do with Solaris or Linux. I have tried some of the remote NT login software which gives me an NT display at home, but it's very slow compared to X11. I understand that there is more than a recompile to make a Linux version (e.g. an extra dimension to their support and verification space), but I can't see why there aren't any other Xilinx users out there who want Linux support? Petter -- ________________________________________________________________________ Petter Gustad 8'h2B | ~8'h2B http://www.gustad.com #include <stdio.h>/* compile/run this program to get my email address */ int main(void) {printf ("petter\100gustad\056com\nmy opinions only\n");}Article: 28344
Xilinx does not have a native version of their tools for Linux, however the Windows version can be run on Linux using Wine. Check out the Xilinx on Linux Howto, http://www.polybus.com/xilinx_on_linux.html I do all of my Xilinx development on Linux. The one missing piece is Synplify, you can synthesize small designs using Win4Lin but for larger designs you will still need a Solaris box somewhere. Synplicity had a beta out on Linux a few months ago but I don't think that they have officially released a Linux version yet. I haven't tried Synplify under Wine, it's possible that that might work. Petter Gustad wrote: > > Some time ago I posted a message regarding Xilinx Alliance for Linux. > I was wondering if Xilinx have any plans on releasing Alliance for > Linux/X86? I've been using Alliance under Solaris, but I would like to > run under Linux since the price/performance ratio is much better. > > Some people suggested using NT. I do this at the moment (well actually > Windows 2000), it works, but it has some drawbacks. Here are some of > them. > > 1) Lack of a nohup and ps command. I'm used to log in from home at > night to start long PAR jobs which takes 5-20 hours to complete. > The process seem to continue, but I wish I had something like nohup > to start the process in the background and log all output to a > file. I also wish I had a ps command to see if the darn thing is > still running. I can not kill the job if I want to. bash for W2K > might solve some of these issues. > > 2) Uptime. Every time you install a little dinky piece of software (or > click in the control panel) you'll have to restart the machine. So > much for uptime and running long simulations and PAR jobs. > > 3) X11. I can't start emacs or signalscan over my ISDN line from home > like I do with Solaris or Linux. I have tried some of the remote NT > login software which gives me an NT display at home, but it's very > slow compared to X11. 
> > I understand that there is more than a recompile to make a Linux > version (e.g. an extra dimension to their support and verification > space), but I can't see why there aren't any other Xilinx users out > there who wants Linux support? > > Petter > -- > ________________________________________________________________________ > Petter Gustad 8'h2B | ~8'h2B http://www.gustad.com > #include <stdio.h>/* compile/run this program to get my email address */ > int main(void) {printf ("petter\100gustad\056com\nmy opinions only\n");}Article: 28345
There is a new version of Hdlmaker. Hdlmaker generates hierarchical verilog/vhdl. http://www.polybus.com/hdlmaker/hdlmaker.htmlArticle: 28346
Hey Xilinx (Austin?, Peter?) what's the real scoop on the DLL 2x outputs? Altera's got a thing on their website at http://www.altera.com/html/products/apex20KE.html indicating the jitter on the 2x output is an order of magnitude higher than the 35ps that has been bandied about before. If this is real, then how does this affect passing a signal between a 1x and 2x clock domain (from the same DLL)? I suspect it might not be able to be done reliably despite previous assurances from Xilinx. How about effects on the maximum clock rates as reported by trce, I imagine a 400 ps jitter is going to eat into that unless you have already factored that jitter into the worst case timing. Perhaps it is just an artifact of their test set-up, but I suspect the numbers may really be worse than what Xilinx portrays (And I suspect that might be the root of a problem I have been having with a design that used both 1x and 2x clocks) felix_bertram@my-deja.com wrote: > > Hello everybody, > > thanks again for all your helpful input. I finally succeeded in > creating a 4x clock, so I am a lucky guy now. > > > DLL starts adjusting clock delay before configuration > > process of the FPGA is completed - your first clock > > doubler not running still. May be DLL will act correctly, > > if RST is connected to input pin and DLL resetted after > > configuration is completed? > > Indeed, the problem seems to be related to the DLL starting to work > *before* the FPGA being completely configured (so before my first clock > doubler is implemented). This seems a bit odd to me, and I ask myself > especially the question: What clock is the DLL running from (and > locking to) before the FPGA is configured? The clock that comes into > the most closely located GCLK pin? Hmm, ... > > However, as I said before, I am a lucky guy now, > best regards > > Felix Bertram > > Sent via Deja.com > http://www.deja.com/ -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 
401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com or http://www.fpga-guru.comArticle: 28347
If it can be done in a single DSP chip and meet all your requirements, you are probably better off staying there. Hardware DSP expertise and FPGA expertise are harder to find than software DSP expertise (and generally more expensive). Additionally, the software tools are at a more advanced state. That said, if the processing you are doing is capable of fitting into a C5410 at 50 MHz, you can most likely fit it into a low end FPGA such as the Xilinx Spartan series for around $10 in small quantities. I've done a large number of Radar front-ends in FPGAs (a few of which are documented in papers on my website) that do far more than a DSP processor could do at the same data rate. tomderham@my-deja.com wrote: > > Hi > > Please humour me and my lack of knowledge, I would appreciate some > advice: > > I am looking at possibilities for a prototype radar receiver (short > range, but good beamforming agility), and was looking at a DSP chip > such as the TI C5410 DSP chip or similar to perform digital > downconversion after the ADC (clocked at say 50MHz). > I am looking into the possibilities of using an FPGA to do something > similar. How do the costs compare for an FPGA comparable in processing > power (the TI chip is only about $50) and are similar development tools > available? > > Thanks for any advice > > Tom > > Sent via Deja.com > http://www.deja.com/ -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com or http://www.fpga-guru.comArticle: 28348
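Whichever device Tom picks, the digital downconversion itself reduces to three stages that pipeline naturally in either a DSP chip or FPGA fabric: quadrature NCO mix, low-pass filter, decimate. A numerical sketch follows; the 50 MHz sample rate matches his ADC, but the 10 MHz IF and decimate-by-8 are assumptions for illustration, and a real DDC would use a proper CIC or FIR rather than this boxcar.

```python
import math

FS, F_IF, DECIM = 50e6, 10e6, 8    # sample rate, IF, decimation (illustrative)

def ddc(samples):
    """Quadrature digital downconverter: mix, low-pass, decimate."""
    # 1) mix with a quadrature NCO to shift the IF down to 0 Hz
    i = [s * math.cos(2 * math.pi * F_IF * n / FS) for n, s in enumerate(samples)]
    q = [-s * math.sin(2 * math.pi * F_IF * n / FS) for n, s in enumerate(samples)]

    def lowpass_decimate(x):
        # 2+3) crude boxcar low-pass over each decimation window,
        # keeping one output sample per DECIM input samples
        return [sum(x[k:k + DECIM]) / DECIM
                for k in range(0, len(x) - DECIM + 1, DECIM)]

    return lowpass_decimate(i), lowpass_decimate(q)

# a pure tone at the IF should come out as (nearly) constant I/Q, i.e. DC
tone = [math.cos(2 * math.pi * F_IF * n / FS) for n in range(640)]
i_bb, q_bb = ddc(tone)
```

The multiply-accumulate structure of all three stages is what maps so well onto FPGA fabric: each stage becomes a short pipeline running at the ADC rate, which is how the FPGA front-ends Ray mentions outrun a sequential DSP at the same clock.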
Ray, The straight scoop is that the 2X jitter output is 2X the clk0 jitter output. Within any one period to the next adjacent period, the jitter is still bounded by a tap change per the data sheet. It is just that in the 2X mode, you can get twice as many tap changes over a few tens of periods as in the CLK0/90/180/270 modes. Thanks to the other guy for bringing this to our attention. As it turns out, they were not the first ones to bring this to our attention. We are preparing a little something with the jitter compared between the DLL and the other guy's PLL, both with and without internal logic happening, and with and without IO's switching (and we used their board!) under the same conditions and with similar designs (as closely as we can make them). Let's just say that we are not going to publish it on the web site, as that would be trivializing a serious issue (control and management of jitter). But we will make it available through the FAE's so that we can share our systems experience and knowledge with our customers. The control and management of jitter will probably become an app note one day. Theoretically, a PLL should beat out the DLL by miles when it comes to jitter. It is all in the implementation, and the layout, and power supply decoupling, and the isolation, and the process lot parameters, and the filter parameters, and oops!..... (again, why state the obvious?). It is all well and good to connect up a digital sampler to a clock input, and push the "jitter" button, but you have to do a hell of a lot more than that. What kind of "jitter" are you measuring: period jitter, ITU-T jitter, stochastic stationary jitter, band pass filtered jitter...? Is your signal integrity perfect (bad rise times = bad jitter!)? What is your clock source jitter? Did you know that P-P jitter increases the longer you measure it? How many samples is good enough? Ever heard of "left and right hand curve fitting"? What is your jitter noise floor? How much jitter is added by going on, and getting off the chip (just IO's)? Does your jitter add arithmetically? What is the spectral content of the jitter? How long do you have to wait to get enough samples? Is the equipment capable of sampling every single event? Did you look at the resulting eye pattern? With pseudorandom data? .... I am not going to tell the whole world here all about jitter measurement methodology ( I spent 12 years at T1X1.3 and in the lab on the subject) ... I'll let the other guy learn the same way I did (and it wasn't all fun and games, although we had some really good lunches and dinners with folks from Lucent, Nortel, Alcatel, Bell Labs, Tellabs, MCI, Sprint, Motorola, .... in those 12 years). Anyone who wants a preview of the measurements may email me directly. As for cascading DLL's, the input tolerance is about 3X what is in the data sheet so the DLL's down the line are going to lock, and remain locked. We are having trouble measuring the actual tolerance as no one makes test equipment that generates that much HF jitter energy. If you have a specific problem with the DLL's, please email me directly, or open a hotline case (I can see them, too). Austin Ray Andraka wrote: > Hey Xilinx (Austin?, Peter?) what's the real scoop on the DLL 2x outputs? > Altera's got a thing on their website at > http://www.altera.com/html/products/apex20KE.html indicating the jitter on the > 2x output is an order of magnitude higher than the 35ps that has been bandied > about before. If this is real, then how does this affect passing a signal > between a 1x and 2x clock domain (from the same DLL)? I suspect it might not be > able to be done reliably despite previous assurances from Xilinx. 
How about > effects on the maximum clock rates as reported by trce, I imagine a 400 ps > jitter is going to eat into that unless you have already factored that jitter > into the worst case timing. Perhaps it is just an artifact of their test > set-up, but I suspect the numbers may really be worse than what Xilinx portrays > (And I suspect that might be the root of a problem I have been having with a > design that used both 1x and 2x clocks) > > felix_bertram@my-deja.com wrote: > > > > Hello everybody, > > > > thanks again for all your helpful input. I finally succeeded in > > creating a 4x clock, so I am a lucky guy now. > > > > > DLL starts adjusting clock delay before configuration > > > process of the FPGA is completed - your first clock > > > doubler not running still. May be DLL will act correctly, > > > if RST is connected to input pin and DLL resetted after > > > configuration is completed? > > > > Indeed, the problem seems to be related to the DLL starting to work > > *before* the FPGA being completely configured (so before my first clock > > doubler is implemented). This seems a bit odd to me, and I ask myself > > especially the question: What clock is the DLL running from (and > > locking to) before the FPGA is configured? The clock that comes into > > the most closely located GCLK pin? Hmm, ... > > > > However, as I said before, I am a lucky guy now, > > best regards > > > > Felix Bertram > > > > Sent via Deja.com > > http://www.deja.com/ > > -- > -Ray Andraka, P.E. > President, the Andraka Consulting Group, Inc. > 401/884-7930 Fax 401/884-7950 > email ray@andraka.com > http://www.andraka.com or http://www.fpga-guru.comArticle: 28349
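One of Austin's quieter points deserves a demonstration: peak-to-peak jitter grows the longer you measure, so a p-p figure quoted without a sample count is meaningless. Synthetic Gaussian period jitter shows it; the 35 ps RMS value is just an illustrative number, not a measurement from either vendor.

```python
# The p-p spread of a stationary random process keeps widening as you
# collect more samples -- which is why "p-p jitter" must always be
# quoted together with how many edges it was measured over.
import random

random.seed(42)                  # fixed seed so the run is repeatable
RMS_PS = 35.0                    # illustrative RMS period jitter, in ps

def pp_jitter(n_edges):
    """Peak-to-peak spread of n_edges Gaussian period-jitter samples."""
    samples = [random.gauss(0.0, RMS_PS) for _ in range(n_edges)]
    return max(samples) - min(samples)

short_run = pp_jitter(100)       # a quick scope measurement
long_run = pp_jitter(100_000)    # leave the sampler running overnight
```

This is also why comparing two vendors' "jitter" numbers is meaningless unless the measurement interval, filtering, and jitter definition all match, which is the thrust of Austin's list of questions.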
Petter Gustad <spam@gustad.com> writes: > Some time ago I posted a message regarding Xilinx Alliance for Linux. > I was wondering if Xilinx have any plans on releasing Alliance for > Linux/X86? I've been using Alliance under Solaris, but I would like to > run under Linux since the price/performance ratio is much better. I asked about this a few months ago. A Xilinx employee said that they don't plan to offer such a thing, since there's "no demand".