Hi, I have four modules/entities in my design. When I synthesize the first module on its own it gives 4% of device utilization. But when I integrate it with the other modules, i.e. when I synthesize the whole design, that individual module takes 50% of the device (which was previously taking 4%). What can be the probable cause for this? Waiting for your answer.Article: 38876
newman wrote: > > I was able to find the problem, which appears to be related to Xilinx > > tools. > > Here it is: > > 1) 3 sets of 14 outputs each when checked with a logic analyzer have > > the correct data, but all the bits are inverted. > > > > 2) The design was simulated functionally before routing -- all is well > > (the outputs are not inverted) > > > > 3) The design was simulated functionally after the routing (.ncd --> > > .vhd) > > -- all is well (the outputs are not inverted) > > > > 4) Same as 3) but with timing info (.sdf file) -- all is well (the > > outputs are not inverted) > > > > 5) Manually verified (by using FPGA Editor) that at least 1 of the > > outputs does not get inverted. > > > > The only other step that was performed was the generation of .bit file > > from .ncd file. I do not know of a way to go from .bit file to .ncd > > file (or .vhd) to verify the functionality after bit file generation. > > I believe that this is where the outputs become inverted. The problem > > was fixed by channeling all the desired outputs through the inverted > > input to each IOB via FPGA Editor, and then regenerating the new bit > > file. > > > > > > Thanks for all help. > > > > Yury > > Yury, > > Very interesting. I was just reading about the Verplex Conformal > Logic Equivalency Checker (LEC) in the Xcell Journal Issue 41. If > what you say is correct, it would not have found the problem either, > because it uses the Post-Par netlist as its final step, and your timing > sim indicated no problem there. > > Nice find, thanks for sharing it with us. > > Newman Seconded. Bugs in bitgen are the nastiest kind since there's no way of finding them short of an external attack with an LA or some such. The worst I can remember was when my design suddenly stopped working, just total death, with 2.1iSPN [N=2 I think]. I went the same route, simulated post-PAR, and all was o.k. After some effort I found by adding probes that the DLL output was not functioning. No clocks => no life. Back to the answers database and, lo & behold, a bug report saying that SP2 bitgen was mis-configuring the DLL! About a year or so before that I'd fallen over a similar problem where the hprep6 JEDEC file generator was screwing up on XC95KXL parts. I wonder if there's some way of using JBits to go from .bit -> .xdl -> .ncd.Article: 38877
Peter Alfke wrote: > Kevin Brace wrote > > > The thing about LUT (Look Up Table)-based FPGAs is that all > > those AND, OR, and NOT gates get converted, and mapped into LUTs when > > you compile your schematics (synthesis in HDL), mapped it to LUTs, and > > P&R the circuit. > > Like LUTs, FFs are fixed resources of the FPGA, so FFs won't be created > > from LUTs. > > I don't know the detailed history of LUT-based FPGAs, but I heard that > > Xilinx was the first company to come up with a 4-input LUT-based FPGA. > > Yes, a 4-input LUT can implement any function of 4 variables, and there exist > 65,536 different functions of four variables, counting all permutations etc. > ( Anybody I interviewed for a Xilinx Applications Engineering job during the > past 14 years had to be able to derive or prove this, otherwise they flunked > the interview.) > > Are they allowed to say 64K instead ? I still have to work out the decimal equivalent on my fingers & toes :-). If they can compute the number of logically inequivalent functions do they get a design job ?Article: 38878
Russell Shaw wrote: >> According to the Altera datasheet I made my own ByteBlaster programmer >> board for programming FLEXes and MAXes. Up to now everything worked >> with minor problems. There was only a limitation of the programmer >> cable length. Two weeks ago I purchased a new fast computer >> (Athlon 1600+). Then I realized that I cannot program any >> chips. Cutting the cable down to 20cm helped but it is rather >> difficult to work with such a short cable. Changing BIOS parallel port >> properties did not help - I tried this. And there is a question: are >> there any problems with the original ByteBlaster (LPT) board used with >> these new fast computers, or should I rather go for a USB programmer >> cable? Does it work with +5V chips? Is it expensive? > >IIRC, the published byteblasterMV schematic doesn't show the >bypass cap. Put a 100nF cap across the supply of the HC244 or >whatever. I don't remember if the cap was on the original schematic, but I have put 100n and 47u tantalum on my board. -- Regards, Marcin E. Hamerla "I hate tourists."Article: 38879
"Sudip Saha" <sudipsaha5@yahoo.com> schrieb im Newsbeitrag news:ee74771.-1@WebX.sUN8CHnE... > Hi, > I have four modules/entities in my design. when I am synthesizing the first module it gives 4% of device utilization. But when I integrate it with other modules means I synthesize the whole design tha individual module takes 50% of device(which was previously taking 4%). what can be the probable cause for it? Depending on how you synthesize the single module it is possible that much logic get optimized out, because some signals are not connectet to anything. -- MfG FalkArticle: 38880
Peter Alfke wrote: > > Kevin Brace wrote > > > The thing about LUT (Look Up Table)-based FPGAs is that all > > those AND, OR, and NOT gates get converted, and mapped into LUTs when > > you compile your schematics (synthesis in HDL), mapped it to LUTs, and > > P&R the circuit. > > Like LUTs, FFs are fixed resources of the FPGA, so FFs won't be created > > from LUTs. > > I don't know the detailed history of LUT-based FPGAs, but I heard that > > Xilinx was the first company to come up with a 4-input LUT-based FPGA. > > Yes, a 4-input LUT can implement any function of 4 variables, and there exist > 65,536 different functions of four variables, counting all permutations etc. > ( Anybody I interviewed for a Xilinx Applications Engineering job during the > past 14 years had to be able to derive or prove this, otherwise they flunked > the interview.) Well, i would have said there's 16 entries, and the permutations don't matter (just affects the order the inputs are connected).Article: 38881
"rickman" <spamgoeshere4@yahoo.com> kirjoitti viestissä:3C539B8F.63AA7699@yahoo.com... > You don't even have to connect the two chains. If there is no provision > to add other components to the scan list, then it won't work. So you > only need to look at the documentation or the software. Well, yes. That absolutely saves most efficiently but after that you have to try it out. :) I checked how it is in (Xilinx) WebPack 4.1. After starting up the programm (Impact, comes with the WebPack software package), initializing the cable and scanning the system you can clean up the JTAG chain. After that by right clicking the mouse there is a pop-up menu where are options for adding Xilinx AND also non-Xilinx device to the chain. If you try to add non-Xilinx device the programmer asks whether the user has BSDL or BIT file for the device. If he/she doesn't, the programmer continues to next menu where you can actually define the device and enter information to create BSDL file for later use. So that is the theory and I did not try any further. I do have the kickstart and several Insight Memec demokits so I will eventually try it out one day, though not very soon. :| But I do encourage anyone who reads this, has the MSP430 + cpld kits and actually soon is bound to build a combined system, carry this out and post the results to this forum. :-) > > -- > > Rick "rickman" Collins Greetings, MattiArticle: 38882
Sorry, Russell, no job offer. ;-) There are 2, 3, 4-input AND, NAND, OR, NOR, XOR, XNOR functions and all sorts of combinations of them, including multiplexers. And then you can have an inversion on any input, etc. But the better reasoning goes like this: To test the functionality 100%, one must obviously apply sixteen "test vectors" to the group of four inputs. At the output, this will create a sequence of 16 bits. Treat that as a word, and, voila, there are 64K different 16-bit words. Yes, some are stupid ( stuck High, etc) and some are the result of input permutations, but there are still a lot of useful functions. I started using this as an interview test question 30 years ago at Fairchild, long before Xilinx, and I have run it by many, many hundreds of applicants. The difference in approaches to the solution is astounding. From a correct answer in 5 seconds to a half-hour argument... Afterwards most candidates agreed that this is a good test of logic thinking. Come to think of it, it is even a smart and efficient way to implement logic on a programmable chip... Peter Alfke, Xilinx Applications Russell Shaw wrote: > Well, i would have said there's 16 entries, and the permutations don't > matter (just affects the order the inputs are connected).Article: 38883
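For readers following the counting argument above, the arithmetic in one line: a 4-input LUT sees 2^4 = 16 possible input patterns, each of which can independently map to 0 or 1, so the truth table is a 16-bit word and the number of distinct functions is 2^(2^4) = 2^16 = 65,536. In general, an n-input LUT can implement 2^(2^n) different functions.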
Peter Alfke wrote: > Sorry, Russell, no job offer. ;-) > > There are 2, 3, 4-input AND, NAND, OR, NOR, XOR, XNOR functions and all sorts of > combinations of them, including multiplexers. > And then you can have an inversion on any input, etc. > > But the better reasoning goes like this: > To test the functionality 100%, one must obviously apply sixteen "test vectors" to > the group of four inputs. At the output, this will create a sequence of 16 bits. > Treat that as a word, and, voila, there are 64K different 16-bit words. Yes, some > are stupid ( stuck High, etc) and some are the result of input permutations, but > there are still a lot of useful functions. > > I started using this as an interview test question 30 years ago at Fairchild, long > before Xilinx, and I have run it by many, many hundreds of applicants. The > difference in approaches to the solution is astounding. From a correct answer in > 5 seconds to a half-hour argument... > Afterwards most candidates agreed that this is a good test of logic thinking. > > Come to think of it, it is even a smart and efficient way to implement logic on a > programmable chip... > > Peter Alfke, Xilinx Applications > One thing I've always wondered since I first came across LUT based architectures is: ``What is the LUT equivalent of Quine-McCluskey sum of products reduction ?'' if there is such a thing. i.e. is there a logic reduction algorithm that's always guaranteed to find an implementation of any logic function in the minimum number of ``levels'' ? Assuming all the inputs are equally critical and ignoring things like F5MUXs, carry chains, etc.Article: 38884
Rick Filipkiewicz wrote: > Are they allowed to say 64K instead ? I still have to work out the decimal > equivalent on my fingers & toes :-). Yes, and "two to the 16th power" is even better. PeterArticle: 38885
Having implemented a couple of designs using the PCI core, I have to say the transaction termination documentation wasn't great. However that is why the example code is given, and it seems to work fine. Mike. "Eric Crabill" <eric.crabill@xilinx.com> wrote in message news:3C5389F5.AABCFF83@xilinx.com... > > Hi, > > I am sorry that you have found the description of COMPLETE a bit, > um, fuzzy. The example code is provided to obscure the "corner" > cases in using this signal. > > In general, asserting COMPLETE tells the core to deassert FRAME# > at the next opportunity. COMPLETE feeds into the next state logic > for FRAME#, which is registered in an IOB output flip flop. > > While this sounds fairly simple, there are several corner cases > because this core is pipelined. > > The user application will be keeping track of how many dataphases > it wants to perform (the "transfer counter"). The counter changes > based on the M_DATA_VLD signal, which indicates that a dataphase > has been completed. However, M_DATA_VLD is a registered function > of IRDY# and TRDY#. > > So, from IRDY# and TRDY#, to the user via M_DATA_VLD, through some > logic to COMPLETE, and back to FRAME#, there are two flip flops. > Hence, the "corner cases": > > // Corner case one, I only wanted one dataphase in the first place > // and need to assert COMPLETE before I even get any indication > // that data transfer has taken place. Assert it immediately. > > assign FIN1 = CNT1 & REQUEST; > > // Corner case two, I only wanted two dataphases in the first place > // and need to assert COMPLETE so that I will get two dataphases. > // If I wait for confirmation of two dataphases, it is too late to > // deassert FRAME#. The INIT_WAITED term is there to stall this > // event if the user application is inserting wait states as an > // initiator. Assert it one cycle after we enter the M_DATA state, > // unless we are inserting wait states. > > assign FIN2 = CNT2 & M_DATAQ & !INIT_WAITED; > > // General case, I am bursting three or more dataphases, and want to > // assert COMPLETE at the right time to get that many dataphases. When > // The transfer count hits three, and M_DATA_VLD asserts, that means in > // the last clock cycle, on the bus, I have transferred the dataphase > // "n-2" of an "n" dataphase burst. > > assign FIN3 = CNT3 & M_DATA_VLD; > > // Assert the COMPLETE signal if either of those three have taken place. > // There is another term that holds COMPLETE asserted until the > // deassertion > // of M_DATA which I have left out. It is in the design guide. > > assign ASSERT_COMPLETE = FIN1 | FIN2 | FIN3; > > Hope this helps, > Eric > > David Miller wrote: > > > > > However, from what I see in the initiator write waveforms in LogiCORE > > > PCI Design Guide Page 13-8 (Figure 13-3), Page 13-10 (Figure 13-5), > > > and Page 14-19 (Figure 14-6), COMPLETE is asserted on the user side > > > at the same time the user side loads the last data on ADIO. > > > > Yeah, that's pretty much what I thought, however the language is rather > > vague about the exact, detailed interpretation of this signal. I had > > hoped for my reading of the waveforms (and my experimentations with > > simulations) to be confirmed by someone else (hello Xilinx!) who has > > direct experience with the core. > > > > > Looking at the waveforms, COMPLETE seems to be asserted until M_DATA > > > is deasserted, like the way you worded. > > > > You are supposed to assert COMPLETE until the transaction is completed.
> > > > > If the initiator is doing only a single cycle transfer, COMPLETE > > > seems like it has to be asserted on the next cycle REQUEST is > > > asserted (Figure 12-4, 12-5, 12-6, 12-7, and 12-8), and has to be > > > kept asserted until M_DATA is deasserted. > > > > In fact, empirically, it seems that you can assert it as late as that > > first cycle in which M_DATA and M_SRC_EN go high and still transfer only > > one word. I imagine that they suggest that COMPLETE and REQUEST be > > asserted at the same time for ease of implementation. > > > > > So, from what I see, it looks like COMPLETE assertion timing will be > > > different depending on whether or not the transfer is a single or a > > > burst. In practice, initiator transfer can be interrupted by a > > > > You see why I find the exact interpretation of COMPLETE somewhat unclear. :) > > > > > target disconnect or a target abort, and when that happens, > > > COMPLETE seems to get ignored (FRAME# will be deasserted if already > > > asserted because STOP# was deasserted.). > > > > Right. I made some major changes to the initiator logic in my design > > here, and I will see how these changes affect the result. The CSR > > vector provides all sorts of useful bits (documented in chapter 2 of the > > user guide) for determining transaction state. When I get these changes > > into simulation, I'll see what the timing of these bits is like. > > > > PERL programmers are familiar with the expression TMTOWTDI[1]. I think > > TMTOWTDI applies equally well to Xilinx's PCI logicore, but it is > > difficult to be certain that you aren't digging a hole for yourself by > > doing things differently from the examples that Xilinx provide without > > strict definitions of the relevant controls. > > > > At the same time it would be a terrible shame for a design's efficiency > > to suffer for lack of understanding on the part of the designer. > > > > [1] There's More Than One Way To Do It > > -- > > David Miller, BCMS (Hons) | When something disturbs you, it isn't the > > Endace Measurement Systems | thing that disturbs you; rather, it is > > Mobile: +64-21-704-djm | your judgement of it, and you have the > > Fax: +64-21-304-djm | power to change that. -- Marcus AureliusArticle: 38886
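[Editorial aside, not part of the original thread: the fragments Eric quotes can be stitched into one self-contained module, which may make the three cases easier to follow. Everything not shown in his post -- the CNT1/CNT2/CNT3 decodes of the user transfer counter, the counter width, and the "hold" term that the design guide describes but the quote omits -- is an assumption here, so treat this as a sketch rather than the actual LogiCORE example code.]

// Sketch only -- NOT the LogiCORE PCI reference code.
// Assumed signals: xfer_cnt is the user's remaining-dataphase counter
// (width is arbitrary here); M_DATAQ is M_DATA delayed by one clock.
module complete_gen (
  input  wire       CLK,
  input  wire       RST,
  input  wire       REQUEST,      // user requests an initiator transaction
  input  wire       M_DATA,       // core is in its data-transfer state
  input  wire       M_DATAQ,      // M_DATA delayed one clock (assumed)
  input  wire       M_DATA_VLD,   // one dataphase completed
  input  wire       INIT_WAITED,  // initiator is inserting wait states
  input  wire [9:0] xfer_cnt,     // remaining dataphases (assumed width)
  output wire       COMPLETE
);
  wire CNT1 = (xfer_cnt == 10'd1);
  wire CNT2 = (xfer_cnt == 10'd2);
  wire CNT3 = (xfer_cnt == 10'd3);

  // Corner case one: single dataphase, assert immediately with REQUEST.
  wire FIN1 = CNT1 & REQUEST;
  // Corner case two: two dataphases, assert one cycle into M_DATA
  // unless initiator wait states are being inserted.
  wire FIN2 = CNT2 & M_DATAQ & !INIT_WAITED;
  // General case: assert as dataphase n-2 of an n-dataphase burst completes.
  wire FIN3 = CNT3 & M_DATA_VLD;

  // Assumed version of the "hold" term: once set, keep COMPLETE asserted
  // until M_DATA deasserts (falling edge detected via M_DATAQ).
  reg hold;
  always @(posedge CLK or posedge RST)
    if (RST)                      hold <= 1'b0;
    else if (M_DATAQ && !M_DATA)  hold <= 1'b0;
    else if (FIN1 | FIN2 | FIN3)  hold <= 1'b1;

  assign COMPLETE = FIN1 | FIN2 | FIN3 | hold;
endmodule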
I would make a rough guess of the reasonable device utilization. Count the flip-flops and compare that to the 4% and the 50% utilization. Either one must be unrealistic. Maybe the synthesizer spirited things away... Peter Alfke, Xilinx Applications ====================== Sudip Saha wrote: > Hi, > I have four modules/entities in my design. When I synthesize the first module on its own it gives 4% of device utilization. But when I integrate it with the other modules, i.e. when I synthesize the whole design, that individual module takes 50% of the device (which was previously taking 4%). What can be the probable cause for this? > > Waiting for your answer.Article: 38887
Why should the order of generating 16 separate 1-bit values matter? What fault could make them non independent? Peter Alfke wrote: > > Sorry, Russell, no job offer. ;-) > > There are 2, 3, 4-input AND, NAND, OR, NOR, XOR, XNOR functions and all sorts of > combinations of them, including multiplexers. > And then you can have an inversion on any input, etc. > > But the better reasoning goes like this: > To test the functionality 100%, one must obviously apply sixteen "test vectors" to > the group of four inputs. At the output, this will create a sequence of 16 bits. > Treat that as a word, and, voila, there are 64K different 16-bit words. Yes, some > are stupid ( stuck High, etc) and some are the result of input permutations, but > there are still a lot of useful functions. > > I started using this as an interview test question 30 years ago at Fairchild, long > before Xilinx, and I have run it by many, many hundreds of applicants. The > difference in approaches to the solution is astounding. From a correct answer in > 5 seconds to a half-hour argument... > Afterwards most candidates agreed that this is a good test of logic thinking. > > Come to think of it, it is even a smart and efficient way to implement logic on a > programmable chip... > > Peter Alfke, Xilinx Applications > > Russell Shaw wrote: > > > Well, i would have said there's 16 entries, and the permutations don't > > matter (just affects the order the inputs are connected).Article: 38888
Ok, it might be easier to find a short or something, but the original question wasn't about testing. Russell Shaw wrote: > > Why should the order of generating 16 separate 1-bit values > matter? What fault could make them non independent? > > Peter Alfke wrote: > > > > Sorry, Russell, no job offer. ;-)Article: 38889
Kevin Goodsell <goodsell@bridgernet.com> wrote in message news:<k4t55u4lf2n5bnppbagfog8e8jbqcipesj@4ax.com>... > Thanks for the additional comments. I've discovered a few things since > my last post: > > The lines I was using as inputs to the FPGA are at least part of the > problem - they seem to be too weak. I added non-inverting buffers and > it *almost* works. Strangely, the FPGA seems to be registering > positive clock edges where it shouldn't. Sometimes this is on a > negative edge, sometimes not. It seems like the clock is too noisy, > but it looks pretty clean on the oscilloscope. I tried creating a > Schmitt trigger using the trick with two resistors and an extra I/O If you have a slow clock and look at it on the oscilloscope, it may look perfect cause you have the timescale zoomed out. Measure the rise time and the fall time of your clock. I suspect it is too slow, and the cmos inputs are interpreting this as multiple clocks per transition. NewmanArticle: 38890
i am freshman,and i need do a 18bit counter,how do i do it is the best on area and timing and resource utilize?My device is Xilinx Virtex2-6000. is the following code OK? *********************************** reg [17:0] counter wire enable; always @(posedge Clk or negedge Rst_N ) begin if(!Rst_N) counter<=0; else if(enable) counter<=counter+1; else counter<=counter; endArticle: 38891
Hi - What's the bandwidth of your 'scope? You're going to need something on the order of at least 700MHz bandwidth for the probe and scope combined (and I'm talking bandwidth, not sampling rate) to see a 1ns-wide glitch on a clock edge. Those 100MHz-bandwidth scopes can make a glitchy signal look mighty fine. Sometimes it's a good idea to keep one in the lab just to boost morale. Bob Perlman -- Cambrian Design Works digital design, signal integrity http://www.cambriandesign.com e-mail: respond to bob at the domain above. On Sat, 26 Jan 2002 18:42:19 GMT, Kevin Goodsell <goodsell@bridgernet.com> wrote: >Thanks for the additional comments. I've discovered a few things since >my last post: > >The lines I was using as inputs to the FPGA are at least part of the >problem - they seem to be too weak. I added non-inverting buffers and >it *almost* works. Strangely, the FPGA seems to be registering >positive clock edges where it shouldn't. Sometimes this is on a >negative edge, sometimes not. It seems like the clock is too noisy, >but it looks pretty clean on the oscilloscope. I tried creating a >Schmitt trigger using the trick with two resistors and an extra I/O >pin, and while I'm not positive that I did it correctly, it made no >apparent difference in what I saw on the output pins using a logic >analyzer. > >I created an even simpler design just to see when the FPGA was >registering a positive clock edge by toggling a pin on each positive >edge, something like this: > >assign output_wire = output_reg; > >always @(posedge clk) > output_reg = output_reg ^ 1; > > >This fails even worse than the shift register. The output wire usually >just mimics the clock, going high on the positive edge and low on the >negative edge. Occasionally it seems to "miss" an edge and remain >unchanged. But when it goes high, it's always on the positive edge, >never the negative, and when it goes low it's always on the negative >edge, never the positive. > >So I'm puzzled. > >-KevinArticle: 38892
On Sun, 27 Jan 2002 19:53:24 -0800, tollyska <dhg@woegt.sdhif> wrote: >i am freshman,and i need do a 18bit counter,how do i do it is the best on area and timing and resource utilize?My device is Xilinx Virtex2-6000. >is the following code OK? >*********************************** >reg [17:0] counter >wire enable; >always @(posedge Clk or negedge Rst_N ) >begin > if(!Rst_N) > counter<=0; > else if(enable) > counter<=counter+1; > else > counter<=counter; >end It is OK except you don't need the final else block. counter remembers its value when there are no events which change it. Muzaffer Kal http://www.dspia.com DSP algorithm implementations for FPGA systemsArticle: 38893
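[Editorial aside: pulling the fragment together as Muzaffer suggests -- semicolon added after the reg declaration, the redundant final else dropped -- gives something like the following self-contained module. The module name and port list are placeholders, not from the original post.]

// 18-bit enabled counter with asynchronous active-low reset (sketch).
module counter18 (
  input  wire        Clk,
  input  wire        Rst_N,     // asynchronous, active-low reset
  input  wire        enable,
  output reg  [17:0] counter
);
  always @(posedge Clk or negedge Rst_N) begin
    if (!Rst_N)
      counter <= 18'd0;
    else if (enable)
      counter <= counter + 1'b1;
    // no final else needed: counter simply holds its value otherwise
  end
endmodule

The tools will map the increment onto the dedicated carry chain of the Virtex-II CLBs, so a counter of this width costs 18 flip-flops plus the associated LUT/carry logic and should not normally be the critical path in an XC2V6000.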
What is your board like? Is the FPGA properly decoupled, and does it have good low impedance paths from the power plane? How about your signal integrity, especially on the clock? Are the edges really clean (keep in mind the chip is fast enough to react to a sub-nanosecond pulse on the clock)? Are the clock levels correct? Do you see any evidence of ground bounce on the output signals? Are your edge rates fast enough (if being driven by some of the older CMOS families, they may not be)? Kevin Goodsell wrote: > Thanks for the additional comments. I've discovered a few things since > my last post: > > The lines I was using as inputs to the FPGA are at least part of the > problem - they seem to be too weak. I added non-inverting buffers and > it *almost* works. Strangely, the FPGA seems to be registering > positive clock edges where it shouldn't. Sometimes this is on a > negative edge, sometimes not. It seems like the clock is too noisy, > but it looks pretty clean on the oscilloscope. I tried creating a > Schmitt trigger using the trick with two resistors and an extra I/O > pin, and while I'm not positive that I did it correctly, it made no > apparent difference in what I saw on the output pins using a logic > analyzer. > > I created an even simpler design just to see when the FPGA was > registering a positive clock edge by toggling a pin on each positive > edge, something like this: > > assign output_wire = output_reg; > > always @(posedge clk) > output_reg = output_reg ^ 1; > > This fails even worse than the shift register. The output wire usually > just mimics the clock, going high on the positive edge and low on the > negative edge. Occasionally it seems to "miss" an edge and remain > unchanged. But when it goes high, it's always on the positive edge, > never the negative, and when it goes low it's always on the negative > edge, never the positive. > > So I'm puzzled. > > -Kevin -- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759Article: 38894
Was the original question about FPGA Editor, and whether or not the user can see logic gates and FFs? Or, was the original question about the input order of a LUT? If the original question is whether or not the input order of a LUT will change the result, then my answer to that question is yes. Let's say there is a 4-input function X = ACD# + A#BC#D + AB# + ABD#. If you swap B and C (That is, an input going into B will now go into C, and an input going into C will now go into B.), the result will be totally different. The same rule will still apply to a LUT. Kevin Brace (Don't respond to me directly, respond within the newsgroup.) Russell Shaw wrote: > > Ok, it might be easier to find a short or something, > but the original question wasn't about testing. > > Russell Shaw wrote: > > > > Why should the order of generating 16 separate 1-bit values > > matter? What fault could make them non independent? > > > > Peter Alfke wrote: > > > > > > Sorry, Russell, no job offer. ;-)Article: 38895
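[Editorial aside: to make the swap argument concrete, here is a small self-checking Verilog sketch, not from the thread. It evaluates the example function with the B and C inputs in their original order and again with them exchanged; the "#" postfix in the post means complement. The two versions disagree for at least one pattern, e.g. (A,B,C,D) = (1,0,1,1) gives 1 originally but 0 with B and C swapped.]

// Illustration of the input-order argument (simulation-only sketch).
module lut_order_demo;
  // f(A,B,C,D) = A C D# + A# B C# D + A B# + A B D#
  function f;
    input a, b, c, d;
    f = (a & c & ~d) | (~a & b & ~c & d) | (a & ~b) | (a & b & ~d);
  endfunction

  reg A, B, C, D;
  integer i;
  initial begin
    for (i = 0; i < 16; i = i + 1) begin
      {A, B, C, D} = i;                    // walk all 16 input patterns
      if (f(A, B, C, D) != f(A, C, B, D))  // original vs. B/C swapped
        $display("functions differ at A=%b B=%b C=%b D=%b", A, B, C, D);
    end
  end
endmodule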
Kevin, here is a simple circuit that tells you whether you have multipe clock pulses or not: Just clock a toggling flip-flop, or a 2-stage binary counter if you want to be fncy, and look at the Q. it will show you wheter you have single or double clock pulses. Peter Alfke, Xilinx Applications ====================== Kevin Goodsell wrote: > Thanks for the additional comments. I've discovered a few things since > my last post: > > The lines I was using as inputs to the FPGA are at least part of the > problem - they seem to be too weak. I added non-inverting buffers and > it *almost* works. Strangely, the FPGA seems to be registering > positive clock edges where it shouldn't. Sometimes this is on a > negative edge, sometimes not. It seems like the clock is too noisy, > but it looks pretty clean on the oscilloscope. I tried creating a > Schmitt trigger using the trick with two resistors and an extra I/O > pin, and while I'm not positive that I did it correctly, it made no > apparent difference in what I saw on the output pins using a logic > analyzer. > > I created an even simpler design just to see when the FPGA was > registering a positive clock edge by toggling a pin on each positive > edge, something like this: > > assign output_wire = output_reg; > > always @(posedge clk) > output_reg = output_reg ^ 1; > > This fails even worse than the shift register. The output wire usually > just mimics the clock, going high on the positive edge and low on the > negative edge. Occasionally it seems to "miss" an edge and remain > unchanged. But when it goes high, it's always on the positive edge, > never the negative, and when it goes low it's always on the negative > edge, never the positive. > > So I'm puzzled. > > -KevinArticle: 38897
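[Editorial aside: a minimal sketch of the test Peter describes, with placeholder names. Route q (and q2, the optional second stage) to spare pins: if the incoming clock is clean, q toggles once per clock edge, i.e. runs at exactly half the clock rate; a doubled edge shows up as an extra q transition.]

// Divide-by-2 test for doubled clock edges (sketch).
module clock_pulse_check (
  input  wire clk_in,   // the suspect clock
  output reg  q,        // toggles on every rising edge the FF actually sees
  output reg  q2        // second divider stage, if you want to be fancy
);
  // For simulation only; in the device the flip-flops come up as 0
  // after configuration anyway.
  initial begin
    q  = 1'b0;
    q2 = 1'b0;
  end

  always @(posedge clk_in) q  <= ~q;
  always @(posedge q)      q2 <= ~q2;   // ripple divider, fine for a bench test
endmodule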
Russell, the circuit has 4 inputs, and its functionality is therefore described by how it reacts to the 16 different patterns that might ever appear on its four inputs. Anything less is insufficient, anything more is redundant. Just think about it, and it will become obvious... Peter Alfke ========================== Russell Shaw wrote: > Ok, it might be easier to find a short or something, > but the original question wasn't about testing. > > Russell Shaw wrote: > > > > Why should the order of generating 16 separate 1-bit values > > matter? What fault could make them non independent? > > > > Peter Alfke wrote: > > > > > > Sorry, Russell, no job offer. ;-)Article: 38898
I am sure there are various explanations of why the manual is vague about the assertion timing of COMPLETE, like a sloppy job by whoever wrote the manual, but I view it this way: providing too much detail would make it too easy for someone else who is trying to clone Xilinx LogiCORE PCI. Kevin Brace (Don't respond to me directly, respond within the newsgroup.)Article: 38899
I've the following message from Xilinx ISE, what to do about the warning " Offset is -6.400ns. Negative offset in this situation may cause a hold violation." -------------------------------------------------------------------------------- Release 4.1.03i - Trace E.30 Copyright (c) 1995-2001 Xilinx, Inc. All rights reserved. trce -e 3 -l 3 -xml ROM_polyphase ROM_polyphase.ncd -o ROM_polyphase.twr ROM_polyphase.pcf Design file: rom_polyphase.ncd Physical constraint file: rom_polyphase.pcf Device,speed: xcv1000,-4 (FINAL 1.115 2001-06-20) Report level: error report -------------------------------------------------------------------------------- ================================================================================ Timing constraint: NET "clk_ibuf/IBUFG" PERIOD = 20 nS HIGH 50.000000 % ; 52 items analyzed, 0 timing errors detected. Minimum period is 13.908ns. -------------------------------------------------------------------------------- ================================================================================ Timing constraint: TS_clk = PERIOD TIMEGRP "clk" 20 nS HIGH 50.000000 % ; 0 items analyzed, 0 timing errors detected. -------------------------------------------------------------------------------- ================================================================================ Timing constraint: OFFSET = IN 20 nS BEFORE COMP "clk" ; 1 item analyzed, 0 timing errors detected. Offset is -6.400ns. Negative offset in this situation may cause a hold violation. -------------------------------------------------------------------------------- All constraints were met. Data Sheet report: ----------------- All values displayed in nanoseconds (ns) Setup/Hold to clock clk ---------------+------------+------------+ | Setup to | Hold to | Source Pad | clk (edge) | clk (edge) | ---------------+------------+------------+ to_SRRC_I | 3.600(F)| 0.000(F)| ---------------+------------+------------+ Clock to Setup on destination clock clk ---------------+---------+---------+---------+---------+ | Src:Rise| Src:Fall| Src:Rise| Src:Fall| Source Clock |Dest:Rise|Dest:Rise|Dest:Fall|Dest:Fall| ---------------+---------+---------+---------+---------+ clk | | 6.954| | 7.340| ---------------+---------+---------+---------+---------+ Timing summary: --------------- Timing errors: 0 Score: 0 Constraints cover 53 paths, 0 nets, and 55 connections (74.3% coverage) Design statistics: Minimum period: 13.908ns (Maximum frequency: 71.901MHz) Analysis completed Sat Jan 26 16:31:26 2002 --------------------------------------------------------------------------------