Ray Andraka wrote: > be really careful about assuming the clocks out of a DLL are phase aligned. They are locked to a > fixed frequency relationship and nominally to a fixed phase, but can be skewed significantly by the > sum of the effects of unequal loading on the clock nets, DLL skew and input jitter. The input > jitter has bitten me once in the past giving enough skew between a 1x and 2x DLL output to cause > data direct from a flip-flop in one domain to a flip-flop in the other to get clocked in on the > wrong edge. To be safe, use some added logic to move the transfer away from the common active > edge. It doesn't have to be a FIFO; clever use of the clock enables and a circuit to synthesize > the slower clock in the faster clock's domain to use as the clock enable is sufficient. > > This question comes up often enough that IMO there ought to be a min delay vs. max skew check for this in TRCE ... maybe PAR could even try to fix up these errors with a little extra interconnect delay [selectable with a user flag of course].Article: 43976
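A minimal Verilog sketch of the clock-enable transfer Ray describes above (module and signal names are my own illustration, not from his design; it assumes both clocks come from the same DLL with roughly 50% duty cycle and nominally aligned rising edges):

module xfer_1x_to_2x (
    input  wire       clk1x,  // 1x DLL output
    input  wire       clk2x,  // 2x DLL output, nominally edge-aligned
    input  wire [7:0] din,    // launched by a clk1x flip-flop
    output reg  [7:0] dout    // captured safely in the clk2x domain
);
    // Toggle flop marks every clk1x rising edge.
    reg t1x = 1'b0;
    always @(posedge clk1x) t1x <= ~t1x;

    // Sample the toggle on the *falling* edge of clk2x: those edges sit
    // a quarter of a clk1x period away from any clk1x transition, so
    // the sample is always clean regardless of DLL skew or jitter.
    reg t_neg = 1'b0;
    always @(negedge clk2x) t_neg <= t1x;

    // Resample on the rising edge; t_pos changes only on the clk2x
    // edge that falls halfway through the clk1x period.
    reg t_pos = 1'b0;
    always @(posedge clk2x) t_pos <= t_neg;

    // Enable is true only for the clk2x edge *between* clk1x edges.
    // The enable path has half a clk2x period to settle; one LUT
    // makes that easily.
    wire safe_ce = t_neg ^ t_pos;

    always @(posedge clk2x)
        if (safe_ce) dout <= din;  // din was launched half a clk1x
                                   // period earlier and is stable here
endmodule

The capture thus lands a full clk2x period away from the shared active edge, so the transfer no longer depends on the 1x and 2x edges lining up exactly.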
Sometimes when I go to web sites looking for explanations of products, I feel like the cat on "Red Dwarf". He is not portrayed as an overly intelligent creature. Often he asks "What is it?", and when he hears the reply he turns to another person and repeats, "What is it?" Obviously he did not understand the answer. I just visited the Xilinx web site looking for information on the available software support tools and I am left with the question, "What is it?" This especially relates to the new ISE BaseX. I am pretty sure I remember receiving an email from Xilinx about this, and it is a paid-for package rather than a free one like ISE WebPack. But I don't see any significant differences other than a very few:
1) Support for older 4K device families and Spartan derivatives via pregenerated EDIF netlist.
2) CORE Generator
3) Timing Improvement Wizard - (What is it?)
4) FPGA Editor
Other differences are in the list of options: WebPack does not have the options to expand some features, but then you can always expand by buying the BaseX version, right? There are also a few interfaces and such that WebPack does not support. Most notable is what BaseX is lacking... no additional support for larger devices such as Virtex > 300K gates and Virtex II (or is VII covered by saying Virtex?). I also could not find a price. "How much is it?" So other than the CORE Generator and the FPGA Editor, what good is BaseX? I don't remember the price, but those features may be worth it. They also include phone support, but I seem to remember that support, even in the first year, is additional. Or am I thinking of the synthesis vendors? -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 43977
Mikhail Matusov wrote: > > > Xilinx publishes two different software products: > > > > 1. Alliance > > > > It only has P&R tools. You must have a synthesizer program to generate > > gate-level description out of a hardware programming language like VHDL, > > Verilog. An example of synthesizer program is Synplicity's Synplify > > (http://www.synplicity.com). Company name is Synplicity, product name is > > Synplify. > > > > 2. Foundation > > > > This is how it used to be. I am not sure whether P&R only package is still > available, but the Foundation certainly is being phased out and replaced > with the Xilinx ISE, which is a full design environment including synthesis, > etc. Up to May 31st there was a choice of synthesis tool that one could use > with it, either FPGA Express or Xilinx XST. As far as I understand the > agreement between Xilinx and Synopsys is now over and new users can no > longer purchase FPGA Express through Xilinx. > > -- > ============================ > Mikhail Matusov > Hardware Design Engineer > Square Peg Communications > Tel.: 1 (613) 271-0044 ext.231 > Fax: 1 (613) 271-3007 > http://www.squarepeg.ca I was just on the Xilinx website looking at this and I did not find any mention of just ISE. They seem to be back to the Foundation/Alliance approach. I do remember that they had introduced a dual path with one including schematic and one without. I am guessing that they got too much negative feedback on dropping the schematic capture and so have kept it in the ISE product. But I don't believe they support the more recent parts via schematic. And if Ray Andraka has quit using schematics, I expect there is not a strong reason to do otherwise. But some wheels squeak louder than others. squeak, squeak... -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 43978
I am writing a proposal to implement a new GPS receiver on an FPGA. A guy at the office just said that in addition to writing the GPS receiver on the FPGA (the application), I will have to write custom protocol stack software. Could people explain this to me a bit and point me to a good reference if available? Thanks Kent KrumviedaArticle: 43979
Let me add my two cents worth to this discussion. I feel that it is better to specify OFFSET IN xx AFTER and OFFSET OUT xx BEFORE rather than the way you are doing it. I am pretty sure I have used these constraints this way and they did what I intended. That is to account for the frequency (or period) and allow you to specify the input and output delays in a frequency-independent way. When you spec OFFSET IN xx BEFORE, you have to take the output delay of the driving chip, add the routing delay if significant, and subtract that from the period of the clock. If your clock changes, you have to change all of your IN specs. The same is true for the OFFSET OUT xx AFTER spec. This requires that you add all the setup factors and subtract from the period. If you use OFFSET IN xx AFTER or OFFSET OUT xx BEFORE, the software uses the period of the clock and does this calculation for you. All you have to do is specify the sum of all delays from the clock to the data being stable on the input pin, or the delays from the output pin to the clock on the chip being driven. Not a big deal, but if you have a lot of different delays and multiple clocks, it can get pretty confusing when you need to make changes. To optimize reusability, it is good to spec the delays rather than the setup time on input, and the setup time required rather than the delay on output.
John Williams wrote: > > Hi folks, > > Can someone illuminate for me the physical interpretation of the OFFSET > IN xx BEFORE and OFFSET OUT xx AFTER constraints? > > I'm happy with the PERIOD constraint, that specifies the maximum delay > along a path between any two sequential elements (registers etc), and I > grasp the physical interpretation of how that relates to the maximum > clock speed. > > I think I understand the OFFSET IN xx BEFORE constraint, basically > saying the data will be on the pad XX ns before the clock goes high - > this is like a setup time right? > > However, the OFFSET OUT xx AFTER constraint is really confusing me. > Does this specify the maximum path delay from the Q output of a register > to the output pad? > > In the dummy pipeline design that I'm experimenting with on a Virtex, > I'm finding that I can meet a period constraint of 100MHz no worries, > but the OFFSET IN BEFORE and OUT AFTER constraints don't make sense, I'm > getting minimum input delays of around 7 ns and output delays > 10ns. > > With my design, the input path from a pad is straight into a register, > and similarly the output path is straight out of a register to the pad - > why am I unable to constrain the OFFSET IN BEFORE and OFFSET OUT AFTER > values to around 2 or 3 ns? I've got the P&R tools on maximum extra > effort. > > Is there somewhere a worked example/tutorial illustrating the use of > these constraints? I've scoured the Xilinx website and found notes and > papers that talk about them, but haven't seen any worked examples. > > Thanks, > > John -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 43980
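To make rickman's two styles above concrete, here is a hypothetical UCF fragment (net names and delay numbers are invented for illustration). With a 20 ns PERIOD, the two OFFSET IN forms constrain the same input path:

# Clock and its period.
NET "clk" TNM_NET = "clk";
TIMESPEC "TS_clk" = PERIOD "clk" 20 ns HIGH 50%;

# Style 1: frequency-dependent.  The 4 ns of external clock-to-out
# plus routing has been subtracted from the period by hand:
# 20 - 4 = 16 ns of setup demanded at the pad.
NET "din" OFFSET = IN 16 ns BEFORE "clk";

# Style 2: frequency-independent.  State only the 4 ns of external
# delay; the tools subtract it from whatever the PERIOD happens to be.
NET "din" OFFSET = IN 4 ns AFTER "clk";

If the clock later slows to a 25 ns period, style 1 has to be edited to 21 ns by hand, while style 2 is untouched - which is exactly the reusability point being made above.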
Just had this same problem figured out today with help from Xilinx tech support. The answer can be found at the following links: http://support.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=13461 http://www.xilinx.com/partinfo/notify/pcn0108.htm Most likely you have a device with the AFP manufacturing code... Reporting Virtex instead of Spartan 2 is normal. Spartan 2E is a different story but I haven't tried them yet. /Mikhail Damir Danijel Zagar <dzagar@srce.hr> wrote in message news:adn3ne$p9q$1@sunce.iskon.hr... > Hi, > > I've successfully implemented a design on Spartan II 200, but during > JTAG programming (Parallel Cable IV) verification fails at location 257. > Also, when manually identifying the JTAG chain, the Spartan > is reported as a Virtex. Otherwise the FPGA works fine and the user ID > code is readable without any problem. What could cause the > verification problems? Regards, > > DamirArticle: 43981
Personally, I want my FPGAs specified like all the other chips on my board: setup/hold and clk-to-out times. The IN AFTER and OUT BEFORE require the cycle time to be hard coded into the timing numbers. If I change my frequency I no longer need to reevaluate my system level setup and holds but need to respecify the timing numbers even if my setup and holds are fine. Maybe it's because I work towards "best" setup and Tco times rather than letting the system come in around them. It's all style. rickman wrote: > Let me add my two cents worth to this discussion. > > I feel that it is better to specify OFFSET IN xx AFTER and OFFSET OUT xx > BEFORE rather than the way you are doing it. I am pretty sure I have > used these constraints this way and they did what I intended. That is > to account for the frequency (or period) and allow you to specify the > input and output delays in a frequency independany way. > > When you spec OFFSET IN xx BEFORE, you have to take the output delay of > the driving chip, add the routing delay if significant and subtract that > from the period of the clock. If your clock changes, you have to change > all of your IN specs. The same is true for the OFFSET OUT xx AFTER > spec. This requires that you add all the setup factors and subtract > from the period. > > If you use OFFSET IN xx AFTER or OFFSET OUT xx BEFORE, the software uses > the period of the clock and does this calculation for you. All you have > to do is specify the sum of all delays from the clock to the data being > stable on the input pin or the delays from the output pin to the clock > on the chip being driven. > > Not a big deal, but if you have a lot of different delays and multiple > clocks, it can get pretty confusing when you need to make changes. To > opimize reusability, it is good to spec the delays rather than the setup > time on input and the setup time required rather than the delay on > output. > > John Williams wrote: > > > > Hi folks, > > > > Can someone illuminate for me the physical interpretation of the OFFSET > > IN xx BEFORE and OFFSET OUT xx AFTER constraints? > > > > I'm happy with the PERIOD constraint, that specifies the maximum delay > > along a path between any two sequential elements (registers etc), and I > > grasp the physical interpretation of how that relates to the maximum > > clock speed. > > > > I think I understand the OFFSET IN xx BEFORE constraint, basically > > saying the data will be on the pad XX ns before the clock goes high - > > this is like a setup time right? > > > > However, the OFFSET OUT xx AFTER constraint is really confusign me. > > Does this specify the maximum path delay from the Q output of a register > > to the output pad? > > > > In the dummy pipeline design that I'm experimenting with on a Virtex, > > I'm finding that I can meet a period constraint of 100MHz no worries, > > but the OFFSET IN BEFORE and OUT AFTER constraints don't make sense, I'm > > getting minimum input delays of around 7 ns and output delays > 10ns. > > > > With my design, the input path from a pad is straight into a register, > > and similarly the output path is straight out of a register to the pad - > > why am I unable to constrain the OFFSET IN BEFORE and OFFSET OUT AFTER > > values to around 2 or 3 ns? I've got the P&R tools on maximum extra > > effort. > > > > Is there somewhere a worked example/tutorial illustrating the use of > > these constraints? I've scoured the Xilinx website and found notes and > > papers that talk about them, but haven't seen any worked examples. 
> > > > Thanks, > > > > John > > -- > > Rick "rickman" Collins > > rick.collins@XYarius.com > Ignore the reply address. To email me use the above address with the XY > removed. > > Arius - A Signal Processing Solutions Company > Specializing in DSP and FPGA design URL http://www.arius.com > 4 King Ave 301-682-7772 Voice > Frederick, MD 21701-3110 301-682-7666 FAXArticle: 43982
How would one account for input jitter? Would the suggested result include a datasheet number for "maximum acceptable jitter?" Clock skew *is* taken into account and the DLL jitter numbers may be in there as well. If the input clock isn't clean, the DLL paths won't be "matched" and the TRCE numbers won't show what we need. Any ideas? Rick Filipkiewicz wrote: > Ray Andraka wrote: > > > be really careful about assuming the clocks out of a DLL are phase aligned. They are locked to a > > fixed frequency relationship and nominally to a fixed phase, but can be skewed significantly by the > > sum of the effects of unequal loading on the clock nets, DLL skew and input jitter. The input > > jitter has bitten me once in the past giving enough skew between a 1x and 2x DLL output to cause > > data direct from a flip-flop in one domain to a flip-flop in the other to get clocked in on the > > wrong edge. To be safe, use some added logic to move the transfer away from the common active > > adge. It doesn't have to be a FIFO, clever use of the clock enables and a circuit to synthesize > > the slower clock in the faster clock's domain to use as the clock enable is sufficient. > > > > > > This question comes up often enough that IMO there ought to be a min delay vs. max skew check for this > in TRCE ... maybe PAR could even try and fix up these errors with a little extra interconnect delay > [selectable with a user flag of course].Article: 43983
Trig functions aren't easy in a PLD, but they can be very fast. I'd recommend staying with fixed-point math, however. The barrel-shifting, etc. required for floating point eats up gates quickly and gets complicated. If you have some blockRAMs, you can use them for sin/cos lookup tables, and if not you can use pipelined Taylor expansions. Each of these takes a lot of gates for multipliers, though. If you can use a small VirtexII you can get embedded blockRAMs and multipliers. For the smallest implementation in cheaper chips go with Ray's CORDIC stuff, but I hope you have better luck figuring out how that works because I haven't yet. You might be able to split up the tasks between the processor and FPGA. I had to have an FPGA rotate a vector to compensate for distortion in a quad modulator. However, the angle of rotation was constant, so I could have the processor calculate the secant and tangent one time and that left the FPGA with only a couple of multiplies and sums to perform on each vector. -Kevin "Anton Erasmus" <junk@junk.net> wrote in message news:3d00fc71.1352331@news.saix.net... > Hi, > > I have done simple EPLDs with device like Altera EPM7064 and 10K Flex > series. So far I have used AHDL. > How difficult is it to do math functions such as axis transformations > in an EPLD ? I am trying to find out whether it would be possible to > use a smallish EPLD / FPGA such as a FLEX 10K10 or 10K20 to > provide meaningfull acceleration of axis transformation done with > an 8 bit processor. The calculations are done in floating point, and > the slowest functions are the sin, cos functions. > I would be greatfull for any input regarding this. > > Regards > Anton Erasmus >Article: 43984
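For the blockRAM lookup approach Kevin mentions, here is a rough Verilog sketch (entirely my own illustration: the widths, the quarter-wave folding, and the $sin initializer are assumptions; the initializer is simulation-only, and a real design would load a precomputed table with $readmemh or CORE Generator):

module sin_lut (
    input  wire              clk,
    input  wire        [9:0] phase,  // 10-bit phase, 0..1023 = one cycle
    output reg  signed [8:0] sine
);
    // One stored quadrant; maps onto a single blockRAM.
    reg [7:0] quarter [0:255];
    integer i;
    initial
        for (i = 0; i < 256; i = i + 1)
            quarter[i] = $rtoi(255.0 * $sin(3.141592653589793 * i / 512.0));

    // Fold all four quadrants onto the stored one.
    wire       half = phase[9];                             // negative half
    wire [7:0] addr = phase[8] ? ~phase[7:0] : phase[7:0];  // mirror quadrant

    reg [7:0] mag;
    reg       half_d;
    always @(posedge clk) begin
        mag    <= quarter[addr];             // registered blockRAM read
        half_d <= half;                      // keep the sign in step
        sine   <= half_d ? -{1'b0, mag} : {1'b0, mag};
    end
endmodule

Two cycles of latency, one blockRAM, no multipliers - the trade against CORDIC is table size versus logic as the precision requirement grows.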
John, Have you read: http://www.xilinx.com/support/techxclusives/slack-techX21.htm ? Austin John_H wrote: > How would one account for input jitter? Would the suggested result include a datasheet number for > "maximum acceptable jitter?" Clock skew *is* taken into account and the DLL jitter numbers may be in > there as well. If the input clock isn't clean, the DLL paths won't be "matched" and the TRCE numbers > won't show what we need. Any ideas? > > Rick Filipkiewicz wrote: > > > Ray Andraka wrote: > > > > > be really careful about assuming the clocks out of a DLL are phase aligned. They are locked to a > > > fixed frequency relationship and nominally to a fixed phase, but can be skewed significantly by the > > > sum of the effects of unequal loading on the clock nets, DLL skew and input jitter. The input > > > jitter has bitten me once in the past giving enough skew between a 1x and 2x DLL output to cause > > > data direct from a flip-flop in one domain to a flip-flop in the other to get clocked in on the > > > wrong edge. To be safe, use some added logic to move the transfer away from the common active > > > adge. It doesn't have to be a FIFO, clever use of the clock enables and a circuit to synthesize > > > the slower clock in the faster clock's domain to use as the clock enable is sufficient. > > > > > > > > > > This question comes up often enough that IMO there ought to be a min delay vs. max skew check for this > > in TRCE ... maybe PAR could even try and fix up these errors with a little extra interconnect delay > > [selectable with a user flag of course].Article: 43985
Thanks for the pointer, Austin. The article doesn't, however, address the issue of crossing between DLL locked clock domains on the *same edge* which is entirely internal. My external jitter issues due to low quality clocks, signal crosstalk, and simultaneously switching outputs can be taken into account within my timing budgets. I was real happy to see a thorough timing analysis with the Xilinx DDR reference design, for instance. If the su and tco in the Xilinx devices are adjusted for the jitter of the DLL, we're in good shape. There's skew due to clock loading and DLL skew the timing analyzer can tell us about. It's the input clock jitter feeding the DLL which can cause higher than expected skew between same-edge transitions that fast signals between registers on the same FPGA can clock wrong. I could see the TRCE tool supplying a "maximum input jitter" value given the same-edge needs between two DLL phase locked clocks as a viable alternative to either 1) always designing around the issue with appropriate clock phases or 2) praying to the P&R gods that everything turns out okay. For now I figure I'll take approach 1). - John_H Austin Lesea wrote: > John, > > Have you read: > > http://www.xilinx.com/support/techxclusives/slack-techX21.htm ? > > Austin > > John_H wrote: > > > How would one account for input jitter? Would the suggested result include a datasheet number for > > "maximum acceptable jitter?" Clock skew *is* taken into account and the DLL jitter numbers may be in > > there as well. If the input clock isn't clean, the DLL paths won't be "matched" and the TRCE numbers > > won't show what we need. Any ideas? > > > > Rick Filipkiewicz wrote: > > > > > Ray Andraka wrote: > > > > > > > be really careful about assuming the clocks out of a DLL are phase aligned. They are locked to a > > > > fixed frequency relationship and nominally to a fixed phase, but can be skewed significantly by the > > > > sum of the effects of unequal loading on the clock nets, DLL skew and input jitter. The input > > > > jitter has bitten me once in the past giving enough skew between a 1x and 2x DLL output to cause > > > > data direct from a flip-flop in one domain to a flip-flop in the other to get clocked in on the > > > > wrong edge. To be safe, use some added logic to move the transfer away from the common active > > > > adge. It doesn't have to be a FIFO, clever use of the clock enables and a circuit to synthesize > > > > the slower clock in the faster clock's domain to use as the clock enable is sufficient. > > > > > > > > > > > > > > This question comes up often enough that IMO there ought to be a min delay vs. max skew check for this > > > in TRCE ... maybe PAR could even try and fix up these errors with a little extra interconnect delay > > > [selectable with a user flag of course].Article: 43986
John_H wrote: > How would one account for input jitter? Would the suggested result include a datasheet number for > "maximum acceptable jitter?" Clock skew *is* taken into account and the DLL jitter numbers may be in > there as well. If the input clock isn't clean, the DLL paths won't be "matched" and the TRCE numbers > won't show what we need. Any ideas? > > Rick Filipkiewicz wrote: > > > > > > > > > This question comes up often enough that IMO there ought to be a min delay vs. max skew check for this > > in TRCE ... maybe PAR could even try and fix up these errors with a little extra interconnect delay > > [selectable with a user flag of course]. I think I was making an implicit assumption that the x1 & x2 clocks come from the same DLL/DCM and that the differential effect (if there is one) of input jitter on the 2 DLL outputs is known, and so can be accounted for by derating the max skew between the 2 outputs from the data book value. There might have to be a TRCE flag, something like -j <max input clock jitter>Article: 43987
John, We do have plans to have the timing predictions "catch up" with the capabilities of the DCMs. Right now, you have to do this by hand, with the data sheet, and using TRCE, and also taking your input jitter into account. As far as the clock buffers internally, since they have transition times less than 100 ps (worst case), they can not be a source of more than 100 ps of jitter (it takes a slow rise, and a change in threshold to "create" jitter, or many many many stages). Internal jitter is therefore a pretty small part of the overall budget. Austin John_H wrote: > Thanks for the pointer, Austin. > > The article doesn't, however, address the issue of crossing between DLL locked clock domains on the *same > edge* which is entirely internal. My external jitter issues due to low quality clocks, signal crosstalk, and > simultaneously switching outputs can be taken into account within my timing budgets. I was real happy to see > a thorough timing analysis with the Xilinx DDR reference design, for instance. > > If the su and tco in the Xilinx devices are adjusted for the jitter of the DLL, we're in good shape. There's > skew due to clock loading and DLL skew the timing analyzer can tell us about. It's the input clock jitter > feeding the DLL which can cause higher than expected skew between same-edge transitions that fast signals > between registers on the same FPGA can clock wrong. > > I could see the TRCE tool supplying a "maximum input jitter" value given the same-edge needs between two DLL > phase locked clocks as a viable alternative to either 1) always designing around the issue with appropriate > clock phases or 2) praying to the P&R gods that everything turns out okay. > > For now I figure I'll take approach 1). > > - John_H > > Austin Lesea wrote: > > > John, > > > > Have you read: > > > > http://www.xilinx.com/support/techxclusives/slack-techX21.htm ? > > > > Austin > > > > John_H wrote: > > > > > How would one account for input jitter? Would the suggested result include a datasheet number for > > > "maximum acceptable jitter?" Clock skew *is* taken into account and the DLL jitter numbers may be in > > > there as well. If the input clock isn't clean, the DLL paths won't be "matched" and the TRCE numbers > > > won't show what we need. Any ideas? > > > > > > Rick Filipkiewicz wrote: > > > > > > > Ray Andraka wrote: > > > > > > > > > be really careful about assuming the clocks out of a DLL are phase aligned. They are locked to a > > > > > fixed frequency relationship and nominally to a fixed phase, but can be skewed significantly by the > > > > > sum of the effects of unequal loading on the clock nets, DLL skew and input jitter. The input > > > > > jitter has bitten me once in the past giving enough skew between a 1x and 2x DLL output to cause > > > > > data direct from a flip-flop in one domain to a flip-flop in the other to get clocked in on the > > > > > wrong edge. To be safe, use some added logic to move the transfer away from the common active > > > > > adge. It doesn't have to be a FIFO, clever use of the clock enables and a circuit to synthesize > > > > > the slower clock in the faster clock's domain to use as the clock enable is sufficient. > > > > > > > > > > > > > > > > > > This question comes up often enough that IMO there ought to be a min delay vs. max skew check for this > > > > in TRCE ... maybe PAR could even try and fix up these errors with a little extra interconnect delay > > > > [selectable with a user flag of course].Article: 43988
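To put numbers on the hand calculation Austin refers to (all values below are invented for illustration, not data sheet figures): for a same-edge transfer between two DLL-derived clock domains, the hold-style check is roughly

    min clock-to-out + min routing delay  >
        clock-net skew + DLL output skew + pk-pk input jitter + FF hold time

So with, say, 150 ps of clock-net skew, 200 ps of skew between the two DLL outputs, and 300 ps of peak-to-peak input jitter, any register-to-register path crossing the domains needs more than about 0.65 ns of guaranteed minimum delay, or the transfer has to be moved off the common edge with a clock-enable scheme like the one Ray described earlier in the thread.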
There's more than 100ps jitter between clk-in to a DLL and clk-out of that same device, isn't there? The output is delayed one or two periods from the input. Input jitter translates to "skew" between what should be aligned edges. This skew can cause timing hiccups. In the case of a clk0 and clk2x or a clk0 and clkfx in a divide by 4 configuration, are the same-edge transitions taken from the same tap (implying no additional skew between the two nets due to input jitter) or can the same-edge taps be different? The first case - one clk in, one DLL output - will produce timing problems with poor input clocks. The second case - both clocks generated by the DLL - might be clean... I'd love to know for certain. Austin Lesea wrote: > John, > > We do have plans to have the timing predictions "catch up" with the capabilities of the DCMs. Right now, you have > to do this by hand, with the data sheet, and using TRCE, and also taking your input jitter into account. > > As far as the clock buffers internally, since they have transition times less than 100 ps (worst case), they can > not be a source of more than 100 ps of jitter (it takes a slow rise, and a change in threshold to "create" jitter, > or many many many stages). Internal jitter is therefore a pretty small part of the overall budget. > > Austin > > John_H wrote: > > > Thanks for the pointer, Austin. > > > > The article doesn't, however, address the issue of crossing between DLL locked clock domains on the *same > > edge* which is entirely internal. My external jitter issues due to low quality clocks, signal crosstalk, and > > simultaneously switching outputs can be taken into account within my timing budgets. I was real happy to see > > a thorough timing analysis with the Xilinx DDR reference design, for instance. > > > > If the su and tco in the Xilinx devices are adjusted for the jitter of the DLL, we're in good shape. There's > > skew due to clock loading and DLL skew the timing analyzer can tell us about. It's the input clock jitter > > feeding the DLL which can cause higher than expected skew between same-edge transitions that fast signals > > between registers on the same FPGA can clock wrong. > > > > I could see the TRCE tool supplying a "maximum input jitter" value given the same-edge needs between two DLL > > phase locked clocks as a viable alternative to either 1) always designing around the issue with appropriate > > clock phases or 2) praying to the P&R gods that everything turns out okay. > > > > For now I figure I'll take approach 1). > > > > - John_H > > > > Austin Lesea wrote: > > > > > John, > > > > > > Have you read: > > > > > > http://www.xilinx.com/support/techxclusives/slack-techX21.htm ? > > > > > > Austin > > > > > > John_H wrote: > > > > > > > How would one account for input jitter? Would the suggested result include a datasheet number for > > > > "maximum acceptable jitter?" Clock skew *is* taken into account and the DLL jitter numbers may be in > > > > there as well. If the input clock isn't clean, the DLL paths won't be "matched" and the TRCE numbers > > > > won't show what we need. Any ideas? > > > > > > > > Rick Filipkiewicz wrote: > > > > > > > > > Ray Andraka wrote: > > > > > > > > > > > be really careful about assuming the clocks out of a DLL are phase aligned. They are locked to a > > > > > > fixed frequency relationship and nominally to a fixed phase, but can be skewed significantly by the > > > > > > sum of the effects of unequal loading on the clock nets, DLL skew and input jitter. 
The input > > > > > > jitter has bitten me once in the past giving enough skew between a 1x and 2x DLL output to cause > > > > > > data direct from a flip-flop in one domain to a flip-flop in the other to get clocked in on the > > > > > > wrong edge. To be safe, use some added logic to move the transfer away from the common active > > > > > > adge. It doesn't have to be a FIFO, clever use of the clock enables and a circuit to synthesize > > > > > > the slower clock in the faster clock's domain to use as the clock enable is sufficient. > > > > > > > > > > > > > > > > > > > > > > This question comes up often enough that IMO there ought to be a min delay vs. max skew check for this > > > > > in TRCE ... maybe PAR could even try and fix up these errors with a little extra interconnect delay > > > > > [selectable with a user flag of course].Article: 43989
> You mention cost as two of the reasons for using a soft CPU, but I think > you will find that the cost of a CPU includes a lot of peripherals that > are not digital in nature and therefore have to be added back to an FPGA > implementation. The clock is one, power on reset, ADC/DAC and analog > comparators are others. Flash and large RAM are other things missing > from FPGAs, they have a few kBytes in the small ones at best. The large > ones cost big, big $$$. By the time you have added back the missing > functions, you will likely have added more $$ and size than you would > have saved with the single MCU and a smaller FPGA.

All of the peripherals can be implemented around an FPGA soft processor, e.g. NIOS. Of course there are limitations. The clock requires an external resonator plus a resistor and capacitors, but a hard processor needs just the same. A big external memory is required, but a hard processor needs just the same. Going from digital to analog is simple, e.g. a delta-sigma DAC; the other direction is harder because it needs an analog comparator. Here a hard processor is not the same: it has some predefined converters, and I have no influence to change them.

> Of course some of the other issues you have raised are not mitigated by > the MCU. Adding commands is the only one that especially seems to stand > out however. :) > The idea that a soft core makes the design process easier is not > especially valid. Just the fact that you need to design a new C > compiler seems to make this harder. Until the new compiler has been > worked enough to fully debug, it will be a major liability.

Not if the soft processor is as standard as a hard one. Then I use the standard software tools.

> The idea that an FPGA has more pins is not valid if you combine an MCU > with an FPGA of a size that matches the task. Keeping them separate > helps to match the hardware to the task. If I use a soft CPU I may need > to buy a huge FPGA to get enough RAM for my task while only needing a > small amount of logic.

Huge or not huge is relative. But if I could get an 8051 with 250 pins, I would say I could connect more to it.

> Starting a design more quickly is a feature of using an MCU. I can > start writing and testing code as soon as I have hardware while an FPGA > design is still messing with P&R. I can then integrate my FPGA design > once I have the basic MCU working. I can even use the MCU as a debug > tool for the FPGA.

It's true. I have a reliable soft MCU and I can integrate it with some peripherals inside an FPGA structure. Moreover I have JTAG, which isn't implemented in all hard processors. The JTAG is the same for all soft processors; hard processors vary, depending on the producer.

> So unless you have a unique design that needs little RAM and nearly no > analog functions, you will do well to use one of the many MCUs on the > market. They have been designed for tight integration of peripherals and > cost effective deployment. They are hard to beat.

There are also semi-soft processor families: the producer supplies a timing-optimized core embedded in the FPGA structure, e.g. Altera's Excalibur. It's a full ARM with a 266 MHz clock and has all the soft-core advantages. It's true that it is an expensive solution, but it's not a unique or little one; it's a full-featured processor. JanuszRArticle: 43990
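The delta-sigma DAC JanuszR alludes to really is tiny in an FPGA. A first-order Verilog sketch (my own illustration; the width and names are assumptions, and the output pin needs an external RC low-pass filter to recover the analog level):

module ds_dac #(parameter W = 8) (
    input  wire         clk,
    input  wire [W-1:0] level,   // digital value to convert
    output wire         pdm_out  // 1-bit stream, filter with an RC
);
    reg [W:0] acc = 0;           // one extra bit catches the carry
    always @(posedge clk)
        acc <= acc[W-1:0] + level;
    // Carry-out is the modulated bit: its average duty cycle is
    // level / 2^W, so the RC filter's output tracks 'level'.
    assign pdm_out = acc[W];
endmodule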
Kevin, The hardware complexity of CORDIC is roughly equivalent to one or two multipliers, depending on the FPGA, so in most cases if you are rotating a vector, CORDIC is the way to go. The exceptions are if you have only one or two fixed angles to rotate by, or if your precision requirements are low (8 bits or less, generally). CORDIC isn't all that difficult to understand. Basically it is performing the rotation as a series of fixed-angle rotations such that the sum of the rotations is equal to the desired total rotation. The angles are specifically chosen so that their tangents are powers of two, which leads to a shift-add operation. Starting with the rotation of a vector in a plane (a Givens rotation) straight from any trig book:

x' = x cos a - y sin a
y' = x sin a + y cos a

Divide out the cosines:

x' = cos a * (x - y tan a)
y' = cos a * (y + x tan a)

Now, ignoring the cosine for a moment, if each elemental rotation is such that its tangent is a power of 2 (45, 26.5, 14, etc. degrees), the term inside the parentheses is a shift and add/subtract:

x' = cos a * (x - y * 2^-i)
y' = cos a * (y + x * 2^-i)

where tan a = 2^-i.

OK, so what about that cosine? At each iteration we always rotate by either atan(2^-i) or -atan(2^-i), which is to say we always rotate by that elemental angle, but we pick the rotation direction that best improves the result toward our goal. By limiting the rotation angle to +/- a, the cosine term becomes a constant, since cos(a) = cos(-a). So for each iteration or elemental angle the rotation is:

x[i+1] = Ki * (x[i] - y[i] * 2^-i)
y[i+1] = Ki * (y[i] + x[i] * 2^-i)

So for a single iteration we rotate one direction or the other by atan(2^-i) degrees. To rotate through a desired angle, then, we increment i and rotate the result from the previous iteration through the new angle. The decision on the angle direction each time is made so that the rotation always rotates the vector toward the target. The sequence of the direction decisions uniquely identifies an angle in an arctangent base. A third chain of add/subtracts can be used to convert that arctangent base into any convenient angle system. Now that you hopefully understand how the rotations are done, pick up the paper to get a deeper understanding of the operation and, more importantly, of all the different modes and a flavor of the applications. It isn't magic, just a really neat application of algebra and trig.

Kevin Neilson wrote: > Trig functions aren't easy in a PLD, but they can be very fast. I'd > recommend staying with fixed-point math, however. The barrel-shifting, etc. > required for floating point eats up gates quickly and gets complicated. If > you have some blockRAMs, you can use them for sin/cos lookup tables, and if > not you can use pipelined Taylor expansions. Each of these takes a lot of > gates for multipliers, though. If you can use a small VirtexII you can get > embedded blockRAMs and multipliers. For the smallest implementation in > cheaper chips go with Ray's CORDIC stuff, but I hope you have better luck > figuring out how that works because I haven't yet. > > You might be able to split up the tasks between the processor and FPGA. I > had to have an FPGA rotate a vector to compensate for distortion in a quad > modulator. However, the angle of rotation was constant, so I could have the > processor calculate the secant and tangent one time and that left the FPGA > with only a couple of multiplies and sums to perform on each vector. > > -Kevin > > "Anton Erasmus" <junk@junk.net> wrote in message > news:3d00fc71.1352331@news.saix.net... > > Hi, > > > > I have done simple EPLDs with device like Altera EPM7064 and 10K Flex > > series. So far I have used AHDL. > > How difficult is it to do math functions such as axis transformations > > in an EPLD ? I am trying to find out whether it would be possible to > > use a smallish EPLD / FPGA such as a FLEX 10K10 or 10K20 to > > provide meaningfull acceleration of axis transformation done with > > an 8 bit processor. The calculations are done in floating point, and > > the slowest functions are the sin, cos functions. > > I would be greatfull for any input regarding this. > > > > Regards > > Anton Erasmus > > -- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759Article: 43991
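Ray's iteration above maps to hardware almost literally. A bare-bones Verilog sketch of one rotation stage (my own illustration; the widths, port names, and externally supplied atan constant are assumptions, and a real pipelined core would register the outputs of each stage):

module cordic_stage #(
    parameter W = 16,     // data path width
    parameter STAGE = 0   // this stage rotates by +/- atan(2^-STAGE)
) (
    input  wire signed [W-1:0] xi, yi, zi,  // vector and residual angle
    input  wire signed [W-1:0] atan_i,      // atan(2^-STAGE), in z's units
    output wire signed [W-1:0] xo, yo, zo
);
    // Rotation mode: drive the residual angle z toward zero.  The sign
    // of z picks the direction - exactly the +/- decision Ray describes.
    wire d = zi[W-1];  // 1 = residual negative, rotate the other way
    assign xo = d ? xi + (yi >>> STAGE) : xi - (yi >>> STAGE);
    assign yo = d ? yi - (xi >>> STAGE) : yi + (xi >>> STAGE);
    assign zo = d ? zi + atan_i         : zi - atan_i;
endmodule

Note the Ki terms are not applied per stage: their product converges to about 0.60725 for any useful number of stages, so the gain is compensated once, by pre-scaling the input vector or post-scaling the output.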
Yep, stopped using schematic entry about 2-3 years ago, pretty much as soon as they gave me a way to do placement of instantiated primitives in the source. Don't feel bad, I'm having a hard time figuring out what the different packages are too... rickman wrote: > I do remember that they had introduced a dual path with one including > schematic and one without. I am guessing that they got too much > negative feedback on dropping the schematic capture and so have kept it > in the ISE product. But I don't believe they support the more recent > parts via schematic. And if Ray Andraka has quit using schematics, I > expect there is not a strong reason to do otherwise. But some wheels > squeak louder than others. > > squeak, squeak... > > -- > > Rick "rickman" Collins > > rick.collins@XYarius.com > Ignore the reply address. To email me use the above address with the XY > removed. > > Arius - A Signal Processing Solutions Company > Specializing in DSP and FPGA design URL http://www.arius.com > 4 King Ave 301-682-7772 Voice > Frederick, MD 21701-3110 301-682-7666 FAX -- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759Article: 43992
Problem is the lion's share of the skew is caused by jitter in the clock input rather than parameters inherent in the FPGA. In our case, we were inducing much of that jitter by hammering a bunch of outputs on the same bank as the clock input. Rick Filipkiewicz wrote: > Ray Andraka wrote: > > > be really careful about assuming the clocks out of a DLL are phase aligned. They are locked to a > > fixed frequency relationship and nominally to a fixed phase, but can be skewed significantly by the > > sum of the effects of unequal loading on the clock nets, DLL skew and input jitter. The input > > jitter has bitten me once in the past giving enough skew between a 1x and 2x DLL output to cause > > data direct from a flip-flop in one domain to a flip-flop in the other to get clocked in on the > > wrong edge. To be safe, use some added logic to move the transfer away from the common active > > adge. It doesn't have to be a FIFO, clever use of the clock enables and a circuit to synthesize > > the slower clock in the faster clock's domain to use as the clock enable is sufficient. > > > > > > This question comes up often enough that IMO there ought to be a min delay vs. max skew check for this > in TRCE ... maybe PAR could even try and fix up these errors with a little extra interconnect delay > [selectable with a user flag of course]. -- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759Article: 43993
The Xilinx PCI core works. Yes, it is expensive. It is a very hard macro / black box. You will NOT be able to change it. I know nothing about the Wishbone core. But the biggest problem with the Xilinx core was getting it to meet timing in an XC4000. In a VirtexE, it should be easy. On Fri, 07 Jun 2002 18:45:39 GMT, "cfk" <cfk_alter_ego@pacbell.net> wrote: >I need to implement a PCI interface in a VirtexE 2000 device. I have been >studying the opencore PCI-Wishbone bridge and I can synthesize it in ISE and >load the bitstream into my VirtexE. It turns out also in this project, that >there is no motherboard with a PCI system controller, so I have implemented >control of RST, a PCICLK generator (which now works after I fixed the >rise/fall times and under/overshoot). I also have an arbiter module, so I am >creating the system controller in the same Virtex. > >I have to say that I have seen mixed comments on the opencore PCI-Wishbone >bridge, and yesterday, I attended a meeting where the subject of changing >the PCI interface from PCI32/33 to PCI64/33 was discussed. That very well >will send me towards LogiCORE as the PCI-Wishbone bridge is PCI32/33 only. > >I would appreciate a discussion of success or failure from others that have >used the PCI-Wishbone bridge and also a discussion of the relative merits of >the various Xilinx LogiCORE offerings. I see three varying in cost from $10K >to $20K and quite frankly I cant see which should be picked. I also see >mention of the fact that the IP seems to have significant strings attached >to it, in that the bitstream needs to be downloaded from the so-called >"Xpresso" cafe, and it doesnt appear that I may get to change the code for >my custom, non-motherboard implementation. > >I thank you all in advance, Charles > >Article: 43994
WebPack. But I don't see > any significant differences other than a very few. > > 1) Support for older 4K device families and Spartan derivatives via > pregenerated EDIF netlist. > > 2) CORE Generator > > 3) Timing Improvement Wizard - (What is it?) > > 4) FPGA Editor > > Other differences are in the list of options, WebPack does not have the > -- > Good questions Rickman, Based on past posts that I remember, you appear to be a frugal tools guy. I was a licensed Foundation BaseX guy, and when I saw the features of the BaseX vs Webpack, I was disturbed that I paid maintenance, while the Web pack guys almost got everything that I got, but paid nothing. I rationalized it away that $299 / year was not too much to pay for phone support and the additional features provided. Last time I checked, the baseX for ISE was $695 / year, which may be inconsequential to a large corporation, but is a hefty increase to the small business owner. I am not a super duper power user, but IMHO, I think the FPGA Editor and its associated methodology separates the men from the boys, although there has been some talk that it has suffered some degradation from the days of 3.1i. Coregen is a useful tool, and I cannot see myself without it for even $695 / year. The timing wizard gives you a hypertext link in the timing analyzer report that gives suggestions on how to improve upon a failed timing path. From the couple of suggestions I have looked at, it has yet to give me any earth shattering suggestions. Overall, I think that the $695 / year is a reasonable price to pay for baseX, because it gives the user a better feel for what the full up system can do. NewmanArticle: 43995
In article <3D015A4B.9780DDA2@andraka.com>, Ray Andraka <ray@andraka.com> writes: > Problem is the lion's share of the skew is caused by jitter in the > clock input rather than parameters inherent in the FPGA. In our > case, we were inducing much of that jitter by hammering a bunch > of outputs on the same bank as the clock input. Was the clock driving those outputs the same as the input clock or asynchronous? If it's the same clock and the raw input clock is reasonably clean then I'd expect the outputs wouldn't start to switch until the clock ticked so the outputs wouldn't be switching at the wrong time. But if there is "too much" jitter on the input clock then some of the time the outputs would be switching at exactly the wrong time. Is there a positive feedback loop in there? I'm fishing for something with alternate good-bad cycles. On the good cycles the internal clock from the previous cycle via the DLL is late so nothing is switching at the wrong time. On the bad cycles, the clock from the previous cycle via the DLL is early so it makes a lot of noise which delays the clock input switching to make the clock late for the next cycle... -- The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.Article: 43996
>Anyway, am using a Spartan II and will use an XC18V02 PROM. When I power-up, >does the PROM begin to program the FPGA immediately after the power levels >are stable? It's all in the data sheet - might take a while to find it if you haven't tracked it down before. What do you mean by "immediately"? Remember that the PROM is passive - if you are using master-serial mode. It's waiting for clocks from the FPGA. The FPGA will go through a clear-everything stage before it starts issuing clocks. That takes a while but might be close to "immediately" if you are thinking about a different time scale. -- The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.Article: 43997
newman wrote: > > > Good questions Rickman, > > Based on past posts that I remember, you appear to be a frugal tools > guy. I was a licensed Foundation BaseX guy, and when I saw the > features of the BaseX vs Webpack, I was disturbed that I paid > maintenance, while the Web pack guys almost got everything that I got, > but paid nothing.

I guess I am another of those WebPACK guys who don't pay for the tools, but can get the latest version. Not bad since I don't have to worry about the license expiring. Although it took 14 hours to download it with a 56K modem . . .

> I rationalized it away that $299 / year was not too > much to pay for phone support and the additional features provided. > Last time I checked, the baseX for ISE was $695 / year, which may be > inconsequential to a large corporation, but is a hefty increase to > the small business owner.

I wish the license was perpetual. Didn't Xilinx use to offer perpetual licenses for their software?

> I am not a super duper power user, but IMHO, I think the FPGA Editor > and its associated methodology separates the men from the boys, > although there has been some talk that it has suffered some > degradation from the days of 3.1i.

A few months ago, I really wanted to use FPGA Editor when I was attempting to meet 66MHz PCI's timings with my own PCI IP core because I felt like I could do a better job than the automatic routing tool, but didn't want to spend money on tools, so I gave up on that.

> Coregen is a useful tool, and I > cannot see myself without it for even $695 / year. > The timing wizard gives you a hypertext link in the timing analyzer > report that gives suggestions on how to improve upon a failed timing > path. From the couple of suggestions I have looked at, it has yet to > give me any earth shattering suggestions. > Overall, I think that the $695 / year is a reasonable price to pay > for baseX, because it gives the user a better feel for what the full up > system can do. > > Newman

I suppose another reason for getting the ISE BASE-X used to be that I believe it also came with Synopsys FPGA Compiler II. Some do say XST is better than FPGA Compiler II, but XST isn't officially supported by LogiCORE PCI. (Not sure about the reason. Does anyone know?) Now that FPGA Compiler II no longer comes with ISE, will new LogiCORE PCI users have to pay thousands of dollars more for a synthesis tool that supports LogiCORE PCI? Kevin Brace (In general, don't respond to me directly, and respond within the newsgroup.)Article: 43999
F3.1i, Target: Virtex 800. In the timing report, the following warning is shown: "Warning: The following connections close cycles, and some paths through these connections may not be analyzed." Is it serious? How do I avoid it? Many Thanks
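That warning usually points at a combinational loop: a signal feeding back to itself through LUTs with no flip-flop in the path, so the analyzer has no clocked start or end point and skips paths through the loop. A contrived Verilog illustration (mine, not from the original design) of the loop and of the usual fix:

module loop_demo (
    input  wire clk,
    input  wire en,
    output wire bad,
    output reg  good
);
    // Closes a cycle: 'bad' feeds back through a LUT with no register,
    // so the tools cannot time paths through it.  This is the pattern
    // the warning is complaining about.
    assign bad = en & bad;

    // Registered feedback breaks the cycle; every path now starts and
    // ends at a clock edge, and the warning goes away.
    always @(posedge clk)
        good <= en & good;
endmodule

It is serious only if the loop is unintentional (a missed register or an accidental latch); if the feedback is deliberate, the warning just tells you those particular paths are not being timed.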