Hi folks,

Can someone illuminate for me the physical interpretation of the OFFSET IN xx BEFORE and OFFSET OUT xx AFTER constraints? I'm happy with the PERIOD constraint, which specifies the maximum delay along a path between any two sequential elements (registers etc.), and I grasp the physical interpretation of how that relates to the maximum clock speed.

I think I understand the OFFSET IN xx BEFORE constraint: basically it says the data will be on the pad xx ns before the clock goes high - this is like a setup time, right? However, the OFFSET OUT xx AFTER constraint is really confusing me. Does this specify the maximum path delay from the Q output of a register to the output pad?

In the dummy pipeline design that I'm experimenting with on a Virtex, I'm finding that I can meet a period constraint of 100 MHz no worries, but the OFFSET IN BEFORE and OUT AFTER constraints don't make sense: I'm getting minimum input delays of around 7 ns and output delays > 10 ns. With my design, the input path from a pad goes straight into a register, and similarly the output path is straight out of a register to the pad - why am I unable to constrain the OFFSET IN BEFORE and OFFSET OUT AFTER values to around 2 or 3 ns? I've got the P&R tools on maximum extra effort.

Is there somewhere a worked example/tutorial illustrating the use of these constraints? I've scoured the Xilinx website and found notes and papers that talk about them, but haven't seen any worked examples.

Thanks,
John
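For readers hunting for a worked example, a minimal UCF sketch of the three constraints being discussed might look like the following. The net names (clk, din, dout) and the numeric budgets are purely illustrative assumptions, not taken from John's design, and the exact syntax should be checked against the Xilinx Constraints Guide for your tool version.

  # Minimal UCF sketch (hypothetical net names clk/din/dout, illustrative budgets).
  # PERIOD: maximum delay between synchronous elements clocked by clk.
  NET "clk" TNM_NET = "clk_grp";
  TIMESPEC "TS_clk" = PERIOD "clk_grp" 10 ns HIGH 50%;

  # OFFSET IN BEFORE: data is valid on the input pad 3 ns before the clock
  # edge at the pad, so pad-to-register delay minus clock delay must fit in 3 ns.
  NET "din" OFFSET = IN 3 ns BEFORE "clk";

  # OFFSET OUT AFTER: data must be valid on the output pad no later than
  # 5 ns after the clock edge at the pad (clock delay + clk-to-Q + Q-to-pad).
  NET "dout" OFFSET = OUT 5 ns AFTER "clk";

In other words, PERIOD constrains register-to-register paths inside the device, while the two OFFSET constraints carve up the external setup and clock-to-out budgets at the pins.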
Article: 43876

I have a design in hand where I have to give the output of a counter to some other module after inverting the LSB (s_out(0)). The relevant code looks like this; the rest of the code is purely structural:

--s_out is the output of the counter
s_lsb <= not s_out(0);
s_out_1 <= s_out(4 downto 1) & s_lsb;

s_out_1 will be connected to the other module, but when I see the RTL view, I cannot see the inverter. The synthesis tool (Synplicity) somehow optimizes away the inverter, which is not intended. Can somebody throw some light on why this happens? I took a primary output and assigned s_lsb to it; there it keeps the inverter. Why so?

TIA
Kuldeep

ps: I earlier posted the same message with the subject "synthesis problem"; it got posted into some old thread.
Article: 43877
Joe,

Thanks for the insight. I have to admit, Altera's scheme, which encrypts a file, seems a lot more secure than Xilinx's method, which is to convert a netlist into a proprietary format, but not an encrypted format. However, it looks like an ordinary user like myself cannot convert an EDIF netlist into Altera's proprietary netlist format, so I will probably have to give up on distributing an EDIF netlist for Altera devices.

Kevin Brace (In general, don't respond to me directly, and respond within the newsgroup.)

Joe wrote:
> The Altera IPs are encrypted using an encryption program that is
> available only to AMPP members (Altera Mega-function Partnership Program
> or something like that). The program also comes with a license generator
> that generates the licenses required to unlock the design.
> An IP provider can then encrypt their IP and send it to the customer,
> along with a license file, and the customer has to add the contents of
> this license file to their MAXPLUS II license file.
> The IP can be any text file format.
>
> Joe
Article: 43878
John,

Thanks for the reply.

Surely if I use a CLKDLL then the relationship between the two clocks and the clock-to-out can be determined deterministically (!) and we can therefore know if it will work or not?

Cheers,
Ken

"John_H" <johnhandwork@mail.com> wrote in message news:3CF79624.E49140E6@mail.com...
> Without losing data or inserting junk in your 100 MHz multiplexed stream, the 25
> MHz has to be related to the 100 MHz in *some* fashion. If the relationship
> between these clocks allows good clk-to-out at 25 MHz relative to the setup and
> hold at 100 MHz, accounting for the skew and jitter between the two domains,
> everything works. If you don't know the relationship, only that they're phase
> locked, a short FIFO would be the cleanest implementation with a "half full" as
> the startup state so the FIFO doesn't over or under fill.
Article: 43879
Greetings,

My code in a Xilinx 4000 FPGA has to listen to two reset signals (one coming from power-on and one coming from a VMEbus). In the top level I OR the two reset signals (not in a clocked process, so just asynchronous). The result of this OR function is then distributed towards several VHDL blocks with the classic structure:

somewhere in the top level:

Reset_N <= '0' when (Reset_pow = '0' or Sysres = '0') else '1';

some block it is distributed to:

process(Clock, Reset_N)
begin
  if (Reset_N = '0') then
    ...
  elsif (Clock'event and Clock = '1') then
    ...
  end if;
end process;

This compiles with Foundation and runs without problem, BUT: in some systems there is a very slow reset, and it seems this sometimes creates a metastable, uhm, something... To overcome this I want to put one or more flip-flops on the Reset_N signal, so I tried the following in the top level:

Reset_N_INTERMEDIATE <= '0' when (Reset_pow = '0' or Sysres = '0') else '1';

And then I add an extra process to the top level:

process(Clock, Reset_N_INTERMEDIATE)
begin
  if (Reset_N_INTERMEDIATE = '0') then
    Reset_N <= '0';
  elsif (Clock'event and Clock = '1') then
    Reset_N <= Reset_N_INTERMEDIATE;
  end if;
end process;

And now Foundation just doesn't want to take it; it gives pages of warnings with things like

warning Reset_pow does not set/reset ...
warning Sysres does not set/reset ...

all together with the flip-flop latch warnings. I don't get the errors when I implement this in a Virtex. How can I overcome the resetting problem on an 'old' XC4000? Maybe I can play a bit with the characteristics of the input pins? What can I do?
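One common way to tame a slow asynchronous reset is to assert it asynchronously but release it synchronously through two flip-flops, so only the release edge has to be metastability-safe. The sketch below is a generic illustration of that idea, not the poster's exact code; the entity name and ports are invented for the example, and whether Foundation infers it cleanly on an XC4000 would still need to be checked.

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical sketch: asynchronous assertion, synchronous release of Reset_N.
entity reset_sync is
  port (
    Clock     : in  std_logic;
    Reset_pow : in  std_logic;   -- active-low power-on reset
    Sysres    : in  std_logic;   -- active-low VMEbus reset
    Reset_N   : out std_logic    -- synchronized active-low reset
  );
end reset_sync;

architecture rtl of reset_sync is
  signal reset_async : std_logic;
  signal sync_ff     : std_logic_vector(1 downto 0);
begin
  -- Combine the two sources, exactly as in the original design.
  reset_async <= '0' when (Reset_pow = '0' or Sysres = '0') else '1';

  -- Two flip-flops: reset asserts immediately (asynchronously),
  -- but deasserts only after two clean clock edges.
  process (Clock, reset_async)
  begin
    if reset_async = '0' then
      sync_ff <= (others => '0');
    elsif Clock'event and Clock = '1' then
      sync_ff <= sync_ff(0) & '1';
    end if;
  end process;

  Reset_N <= sync_ff(1);
end rtl;

The point is that downstream blocks then see Reset_N coming straight off a flip-flop output rather than a slowly ramping or glitching level.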
Article: 43880

Jesse Kempa <kempaj@yahoo.com> wrote:
> Hi,
> Interesting discussion, a couple of points:
<snip>
> The apex chips are really scant when it comes to memory, but the
> memory that is there is of a much higher speed compared to traditional
> processor instruction memory... running anything beyond a basic
> software application out of FPGA memory can be likened to running a
> car on rocket fuel :) Since there isn't much of it (1 LE = 1 register,
> and then there is the embedded memory arrays, ESBs I think they call
> them), memory on an apex chip for storing code is very expensive -
> better to find a way to scrape up the IOs for a single tri-state bus
> interface! The literature I've read says that the newer architecture
> (Stratix) has much more embedded memory (~10x more than an apex chip of
> the same density).

ACK. But I did not want to spend too much effort on the NIOS - I just wanted to use it if it is simple. In my application I do not really need it; I can solve my problems with a VHDL state machine and consume much less of the expensive APEX resources. The NIOS then remains left only as a comfortable debugging interface.

In a previous post I also pointed out that the Stratix devices have much more embedded memory and therefore the use of internal memory can be an option.

<snip>

Roman

> - Jesse
Article: 43881
In article <a0f016a9.0206042045.5c1acf9d@posting.google.com>, kuldeep <kkdeep@mailcity.com> writes
> i have a design in hand where i have to give output of a counter to
> some other module after inverting the lsb.
> The relevant code looks like this; the rest of the code is purely structural:
> --s_out output of counter
> s_lsb <= not s_out(0);
> s_out_1 <= s_out(4 downto 1) & s_lsb;
> s_out_1 will be connected to the other module
>
> but when i see the RTL view, i cannot see the inverter. The synthesis
> tool (Synplicity) somehow optimizes the inverter, which is not intended.
> Can somebody throw some light on why it happens?
> i took a primary output and assigned s_lsb to it. it keeps the
> inverter. why so?
> TIA
> Kuldeep
>
> ps: i earlier posted the same message with the subject "synthesis
> problem"; it got posted into some old thread.

Generally my experience is that the synthesis tool is correct and I am wrong when this kind of thing happens! I would check:

a) is bit s_out(0) used anywhere?
b) does bit s_out(0) change during simulation (for instance, perhaps your counter is incrementing 2 steps at a time)?
c) has Synplicity done something clever to the counter so it doesn't need the inverter?

regards
Alan
--
Alan Fitch
DOULOS Ltd.
Church Hatch, 22 Market Place, Ringwood, Hampshire BH24 1AW, United Kingdom
Tel: +44 1425 471223   Email: alan.fitch@doulos.com
Fax: +44 1425 471573   Web: http://www.doulos.com
**********************************
**  Developing design know-how  **
**********************************
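If the inverter really is being merged away by optimization (rather than one of Alan's explanations applying), one workaround not mentioned in the thread is to mark the intermediate signal with Synplify's syn_keep attribute so it survives optimization. The following is only a sketch built around the signals from the original post; verify the attribute name and form against your Synplify version's documentation.

library ieee;
use ieee.std_logic_1164.all;

-- Sketch: keep the inverted-LSB net through synthesis (Synplify-specific attribute).
entity lsb_invert is
  port (
    s_out   : in  std_logic_vector(4 downto 0);
    s_out_1 : out std_logic_vector(4 downto 0)
  );
end lsb_invert;

architecture rtl of lsb_invert is
  signal s_lsb : std_logic;
  -- Ask the synthesis tool to preserve this signal as a named net.
  attribute syn_keep : boolean;
  attribute syn_keep of s_lsb : signal is true;
begin
  s_lsb   <= not s_out(0);
  s_out_1 <= s_out(4 downto 1) & s_lsb;
end rtl;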
Article: 43882

Hello,

If I get Xilinx Core Generator to generate a core and it reports the following for its footprint:

"1,471 LUT sites used, 2,013 register sites used"

how can I calculate (roughly) how many slices it will use when APR'd? Can I say 2013/2 = 1006.5 slices, since there are two register sites per slice? Would this be reasonable?

Also, is there any way to get the tools to give me a real slice cost for the core without having to go through the whole UCF process etc. to avoid the core being optimised away since it would not be connected to anything?

Thanks for your time,
Ken
Article: 43883
John wrote:
> Every rational number can be expressed as a repeating expansion
> (I recall from the dimness of a high school math course, don't
> ask me to dig deep and prove it, I just build the stuff now)
>
> 1/11 is an interesting case, the repeating pattern is

In fact the true statement is that the expansion of any rational number to any radix is either finite or *eventually* drops into a repeated pattern (since the long division can only produce finitely many distinct remainders - at most N of them for denominator N). However, there will in general be an initial digit sequence before the periodicity starts.

To show that the binary expansion of 1/N for N odd is always purely periodic, you just have to note that over all possible exponents E there are only N possible remainders (2**E - 1) % N. Choose two exponents E1 < E2 with the same remainder R:

2**E1 - 1 = N * K1 + R
2**E2 - 1 = N * K2 + R

Then

(2**E1) * (2**(E2-E1) - 1) = N * (K2 - K1)

Since N is odd, it has no factor in common with 2**E1, so 2**E1 must divide (K2 - K1); cancelling it leaves N dividing (2**(E2-E1) - 1). Which is what we wanted. [Works for radixes (radices? radixen?) that are powers of a prime p with N % p != 0.]

For N even we have, for some E, N = (2**E) * M with M odd, so the binary expansions of 1/N for even N are a number of 0's followed by a periodic pattern, e.g.

1/12 = .000101010101...
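As a concrete check of the odd-denominator case (and of the 1/11 example in the quoted post), one can work the arithmetic by hand; the following is my own calculation, not part of the original thread.

$$
\mathrm{ord}_{11}(2) = 10, \qquad 2^{10} - 1 = 1023 = 11 \times 93, \qquad 93 = 0001011101_2,
$$
$$
\frac{1}{11} \;=\; \frac{93}{2^{10}-1} \;=\; 0.\overline{0001011101}_2,
$$

so 1/11 is purely periodic in binary with period 10, exactly as the proof above predicts.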
Article: 43884

John Williams <j2.williams@qut.edu.au> wrote:
> I think I understand the OFFSET IN xx BEFORE constraint, basically
> saying the data will be on the pad XX ns before the clock goes high -
> this is like a setup time right?

Yes. The constraint tells the tools the maximum amount of setup time you will tolerate.

> However, the OFFSET OUT xx AFTER constraint is really confusing me.
> Does this specify the maximum path delay from the Q output of a register
> to the output pad?

That's right. You want to minimise the clock-to-out delay so as to maximise setup time at the receiver.

> In the dummy pipeline design that I'm experimenting with on a Virtex,
> I'm finding that I can meet a period constraint of 100MHz no worries,
> but the OFFSET IN BEFORE and OUT AFTER constraints don't make sense, I'm
> getting minimum input delays of around 7 ns and output delays > 10ns.
> With my design, the input path from a pad is straight into a register,
> and similarly the output path is straight out of a register to the pad -
> why am I unable to constrain the OFFSET IN BEFORE and OFFSET OUT AFTER
> values to around 2 or 3 ns? I've got the P&R tools on maximum extra
> effort.

Are the input and output registers being placed in the IOBs? You can find out in the MAP report. That will cut down your input/output delays significantly; expect a couple of nanoseconds or less of input delay. Output delays, however, are quite long. They are dependent on the IO type, including (where applicable) the slew rate and drive strength parameters, e.g. fast 24 mA drive LVTTL has > 2 ns less clock-to-out than the default slow 12 mA drive. Better still for HSTL etc.

Hamish
--
Hamish Moffatt VK3SB <hamish@debian.org> <hamish@cloud.net.au>
Article: 43885
Thanks everyone! The bug has been traced to some internal FIFO overflow and (hopefully) will be fixed soon. However, I appreciate all the help and have found the information very valuable.

Thanks again,
Nimrod.
Article: 43886
If the particular coregen macro is placed in the source, then the number of slices can be known. Otherwise, it depends on how the flip-flops and LUTs get packed during the Xilinx mapping. The slice count will lie between half the larger of the register or LUT sites as a lower bound and the sum of the LUT sites and register sites as an upper bound. Realistically, you'll get somewhere around 70% packing. If you just run through the map, you'll get a detailed report of the slice usage.

You can run the core through under a wrapper that connects all its I/O to FPGA pins; that way it will take out any logic unused in your application without taking out stuff you'll be using. Keep in mind that the slice count may change due to different packing when you add it to your design.

Ken Mac wrote:
> Hello,
>
> If I get Xilinx Core generator to generate a core and it reports the
> following for its footprint:
>
> "1,471 LUT sites used, 2,013 register sites used"
>
> How can I calculate (roughly) how many slices it will use when APR'd?
>
> Can I say 2013/2 = 1006.5 slices since there are two register sites per
> slice?
>
> Would this be reasonable?
>
> Also, is there any way to get the tools to give me a real slice cost for the
> core without having to go through the whole UCF process etc to avoid the
> core being optimised away since it would not be connected to anything?
>
> Thanks for your time,
>
> Ken

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930   Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
Article: 43887
Ray,

Thanks for your swift reply.

<snip>
> The slice count will lie between half the
> larger of the register or lut sites as a lower bound and the sum of the lut
> sites and register sites as an upper bound.
<snip>

So, the slice count will lie between 2013/2 and (1471+2013)/2 (or did you just mean 1471+2013 as the upper bound?)

This is good enough for an estimate at the moment - thanks very much.

Ken

> Ken Mac wrote:
> > Hello,
> >
> > If I get Xilinx Core generator to generate a core and it reports the
> > following for its footprint:
> >
> > "1,471 LUT sites used, 2,013 register sites used"
> >
> > How can I calculate (roughly) how many slices it will use when APR'd?
> >
> > Can I say 2013/2 = 1006.5 slices since there are two register sites per
> > slice?
> >
> > Would this be reasonable?
> >
> > Also, is there any way to get the tools to give me a real slice cost for the
> > core without having to go through the whole UCF process etc to avoid the
> > core being optimised away since it would not be connected to anything?
> >
> > Thanks for your time,
> >
> > Ken
>
> --
> --Ray Andraka, P.E.
> President, the Andraka Consulting Group, Inc.
> 401/884-7930   Fax 401/884-7950
> email ray@andraka.com
> http://www.andraka.com
>
> "They that give up essential liberty to obtain a little
> temporary safety deserve neither liberty nor safety."
> -Benjamin Franklin, 1759
Article: 43888
In the Virtex-II it's said that the IOBUF tristate buffers are active high, but in the unisims model it acts as active low - which is right?
Article: 43889
Eyal,

This has been here (on this newsgroup) before, but Peter is out of town, so I will see if I can repeat his comment faithfully.

Tristate = 1 means that it is in the tristate condition, and you must bring it to 0 (low) to enable the driver. This matches the fpga_editor view of the IOB, so I am inclined to believe this.

There always seems to be some confusion: is it true to drive (then it isn't a tristate, but a tristate_b input, or an enable input), or is it true to be tristate (then it is a tristate, or an enable_b input)? So do you mean that a '0' enables the output, or do you mean that a '0' causes the output to be tristate?

Austin

Eyal Shachrai wrote:
> in the virtex 2 it's said that the IOBUF tristate buffers
> are active high, but in the unisims model it acts
> as active low, which is right?
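For what it's worth, a minimal unisim instantiation makes the polarity easy to check in simulation. The sketch below assumes the standard IOBUF ports (I, T, O, IO) from the unisim library; the wrapper entity and port names are invented for the example. With T = '1' the pad should float, and with T = '0' it should drive, matching Austin's description.

library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

-- Sketch: wrap an IOBUF so its tristate polarity can be observed directly.
entity iobuf_check is
  port (
    data_out : in    std_logic;  -- value to drive onto the pad
    tri      : in    std_logic;  -- '1' = high-Z, '0' = drive (to be confirmed)
    data_in  : out   std_logic;  -- value seen on the pad
    pad      : inout std_logic
  );
end iobuf_check;

architecture rtl of iobuf_check is
begin
  u_iobuf : IOBUF
    port map (
      I  => data_out,
      T  => tri,
      O  => data_in,
      IO => pad
    );
end rtl;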
Article: 43890

and remember, a Corleone knows how to return a favor!
Article: 43891
Your place and route should have the option of "packing IOB registers [None | Inputs Only | Outputs Only | Inputs and Outputs]." Use of DLLs can improve your clock-to-out time.

Sometimes delays are introduced on the inputs to make sure there's positive hold - I've found these delays sometimes muck up other parts of my logic. Look in the FPGA editor to see if the input IOB pads feed through the delay element or not. You can use the NODELAY constraint in the ucf to get rid of delays you don't want.

If you can go straight into registers and straight back out again you're in reasonable shape. I have to work desperately hard to get my inputs through one level of logic to a registered output with a total OFFSET IN BEFORE plus OFFSET OUT AFTER (setup + Tco) under 7 ns in a Spartan-II speed grade 6 (for a point of reference).

The outputs need to be specified as FAST outputs, often through the ucf, and if you're using LVTTL (or Virtex-II LVCMOS) you can change the drive levels - more current gives better times. There is a theoretical minimum for your setup and Tco times that you can get from the speed files. On the unix tool set, the "speedprint" utility gives me numbers to help me find my minimums.
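Pulling the suggestions in this thread together, the corresponding UCF entries might look roughly like the sketch below. The net and instance names (din, dout, din_reg, dout_reg) are invented for the example, and the exact attribute spellings should be checked against the Constraints Guide for your software version.

  # Pack the input/output flip-flops into the IOBs (can also be a MAP/PAR option).
  INST "din_reg"  IOB = TRUE;
  INST "dout_reg" IOB = TRUE;

  # Remove the input delay element if you can tolerate the hold-time implications.
  NET "din" NODELAY;

  # Fast, high-drive LVTTL output for better clock-to-out.
  NET "dout" FAST;
  NET "dout" DRIVE = 24;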
Article: 43892

Falk Brunner (Falk.Brunner@gmx.de) wrote:
: "John Eaton" <johne@vcd.hp.com> schrieb im Newsbeitrag
: news:adh2p5$38c$1@news.vcd.hp.com...
: > I have it on good authority that connecting a 3.3 volt 1M gate part to a
: > flakey connector that shorts I/O pads to 5 volts WILL destroy the FPGA.
: >
: > And in the true interest of science the folks that performed that little
: > experiment then verified that it was reproducible.

: So they are real scientists. They have reproducible results ;-)

: More serious, I think overvoltage was not the question here, was it?
-------------------------------------------------

It was latch-up. Some 3.3 volt outputs were shorted to +5 V and sent the whole chip into meltdown mode.

We also have a new crop of software engineers who don't fully understand how to handle breadboards. You have to really babysit them and be ready to kill the power the INSTANT you hear, see, smell, taste or feel anything odd. The new guys will power them up and then run back to their cube and log in remotely to a debug session. About half an hour later somebody will walk by and ask them if they know that their breadboard is on fire.

John Eaton
Article: 43893
If you use the CLKDLL to generate one clock (or both) you will absolutely be able to make the transition between domains. The metastability question does come into play. Ray Andraka has pointed out how the clock net loading and the jitter in the DLLs can make the destination edge happen after the source signal has already transitioned to the next cycle, catching the wrong phase of the signal. It may be better to choose a "safe" alignment of the signals so you have a known delay (choose a different phase of the 100 MHz clock) that isn't destroyed by jitter and skew.

If your 25 MHz data path runs off the 100 MHz clock with clock enables every four cycles, your implementation will be clean - no problems with two clock domains, since the edges aren't merely related, they're the same!

Ken Mac wrote:
> John,
>
> Thanks for the reply.
>
> Surely if I use a CLKDLL then the relationship between the two clocks and
> the clock to out can be determined deterministically (!) and we can
> therefore know if it will work or not?
>
> Cheers,
> Ken
>
> "John_H" <johnhandwork@mail.com> wrote in message
> news:3CF79624.E49140E6@mail.com...
> > Without losing data or inserting junk in your 100 MHz multiplexed stream, the 25
> > MHz has to be related to the 100 MHz in *some* fashion. If the relationship
> > between these clocks allows good clk-to-out at 25 MHz relative to the setup and
> > hold at 100 MHz, accounting for the skew and jitter between the two domains,
> > everything works. If you don't know the relationship, only that they're phase
> > locked, a short FIFO would be the cleanest implementation with a "half full" as
> > the startup state so the FIFO doesn't over or under fill.
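As an illustration of the single-clock/clock-enable approach John_H describes, a sketch along the following lines keeps everything in the 100 MHz domain and treats the "25 MHz" register as an ordinary register enabled one cycle in four. The entity, port names and data width are made up for the example.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sketch: one clock domain, the 25 MHz "domain" realised as a clock enable.
entity ce_divider is
  port (
    clk100    : in  std_logic;                     -- single 100 MHz clock
    fast_data : in  std_logic_vector(7 downto 0);  -- data in the 100 MHz path
    slow_data : out std_logic_vector(7 downto 0)   -- updates once every 4 cycles
  );
end ce_divider;

architecture rtl of ce_divider is
  signal ce_count : unsigned(1 downto 0) := (others => '0');
begin
  process (clk100)
  begin
    if rising_edge(clk100) then
      ce_count <= ce_count + 1;
      if ce_count = 3 then             -- clock enable: true one cycle in four
        slow_data <= fast_data;        -- behaves like a 25 MHz register
      end if;
    end if;
  end process;
end rtl;

Since every flip-flop sees the same 100 MHz clock edge, there is no domain crossing to analyse at all.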
Article: 43894

John Williams <j2.williams@qut.edu.au> wrote in message news:<3CFD82AB.C7B4B383@qut.edu.au>...

John,

Here are a few thoughts that might help you out. A 7 ns setup time would appear to be excessive. To get a really good setup time, one should try to register the input in the IOB register if possible. One cannot have any logic between the input pad and the register for this to happen. Also, one has to enable the register inputs and outputs properly in the implementation phase. Similar thoughts apply to registering the output in the IOB register.

If you can use the DLL, this can shorten the clock delay, so the FPGA input-clock-to-registered-output delay is less. This may increase the setup delay. You can go into the FPGA Editor to verify that the IOB registers are being used in your implementation.

Newman
Article: 43895
My question refers to the Virtex-II FPGA's register initial values after configuration: are they the same as the values that are set by the reset signal?
Article: 43896
There was a paper on FPGA viruses at FPL in 99 (I think) that went into this kind of thing. One of the best ways to do this is to build a bunch of ring oscillators: just link a bunch of inverters in a circle and you'll get a good chip fryer. When we were building boards with the 6200 part, one customer did just that with a home-grown tool and blew up the device in no time. One of the cases where too much knowledge can hurt you when it comes to having the fully documented bitstream information. Of course, later on that tool was able to evolve the "best" 16-bit sorter.

Steve Casselman

"Michael Boehnel" <boehnel@iti.tu-graz.ac.at> wrote in message news:3CFB4771.F64A2370@iti.tu-graz.ac.at...
> Hello!
>
> Is it possible to kill (thermally destroy) an FPGA by a highly
> optimized design (hand-placed; high-density; little unrelated logic)
> assuming that interface lines are OK/room temperature?
>
> Did anybody observe such a behavior?
>
> Regards
>
> Michael
Article: 43897
Eyal Shachrai wrote:
> My question refers to the Virtex-II FPGA's register initial
> values after configuration: are they the same as the values
> that are set by the reset signal?

If you have a single async reset, yes. A synchronous reset is sometimes synthesized with the reset in the logic, resulting in the initial state being unrelated to the sync reset state. If you have registers which reset high but you have another async clear on those registers, the clear dominates over the preset, giving a config-low state. You can override the power-up state by using the INIT constraint in the user constraints file.

Simple rules:
If a register has a single set/preset or reset/clear, it will power up to that state.
If a register has no set/preset or reset/clear, it will power up low.
If a register has both a set/preset and a reset/clear, it will power up low.
If you specify an INIT constraint in the .ucf you can override these default rules.

IOBs may behave a little differently. I recall a slight "gotcha" in previous families but nothing serious.
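To make the first rule concrete, a register described with a single asynchronous preset, as in the hypothetical fragment below, would be expected to come out of configuration holding '1' - the same state its preset drives it to. The entity and signal names are invented for the example.

library ieee;
use ieee.std_logic_1164.all;

-- Sketch: a register with a single async preset should power up to '1',
-- matching the state the preset puts it in.
entity preset_reg is
  port (
    clk    : in  std_logic;
    preset : in  std_logic;   -- asynchronous, active high
    d      : in  std_logic;
    q      : out std_logic
  );
end preset_reg;

architecture rtl of preset_reg is
begin
  process (clk, preset)
  begin
    if preset = '1' then
      q <= '1';               -- single set/preset: power-up state is also '1'
    elsif rising_edge(clk) then
      q <= d;
    end if;
  end process;
end rtl;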
"Roger King" <roger@king.com> schrieb im Newsbeitrag news:BlfL8.215525$t8_.150019@news01.bloor.is.net.cable.rogers.com... > When you burn a design on an FPGA(Altera 7000) is it hard-wired? Can it be THe MAx7000 is a CPLD series. CPLDs are the little brothers of FPGAs. > changed internally, without the assistance of a PC? Like can one design some > circuit that can change itself? Maybe like neural networks... or some other > AI stuff... Thanks Hmm, theoretical yes. But Iam doubtfull if this will work out really good. A FPGA can not DIRECTLY change its own design, but another FPGA/uC can reprogramm it. -- MfG FalkArticle: 43899
"Kevin Brace" <ihatespam99kevinbraceusenet@ihatespam99hotmail.com> schrieb im Newsbeitrag news:adk8n9$vab$1@newsreader.mailgate.org... > I tried what you wrote with ISE WebPACK 4.2WP2.0, and although when > synthesis starts, XST will display that it is going to generate an EDIF > and an NGC netlist, at the end, it will only generate an EDIF netlist. > I have to admit, being able to do everything from ISE's GUI is a lot > more convenient. > However, isn't this option to manually add XST command line options from > ISE's GUI new? Ahhm, it looks like that the commercial version of ISE (4.2, SP2) dont offer this :-( Anyone got a clue? -- MfG Falk