Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
On Sat, 16 Mar 2002 06:20:46 -0000, hmurray-nospam@megapathdsl.net (Hal Murray) wrote:
>A more modern version is:
>
> High-Speed Digital System Design
> A Handbook of Interconnect Theory and Design Practices
> by Hall, Hall, and McCall

I'll take a look at it, too. Thanks. I agree with your comments on Johnson and Graham's book.

Jon

Article: 40826
Dan wrote:
> Hi Eric,
>
> Good point about all the FFs in an IOB needing the same clk.
>
> Well, my IOB=TRUE constraint did not get implemented. Do you know what else
> might have caused this problem?
>
> Sincerely
> Daniel DeConinck
> www.PixelSmart.com
> TEL: 416-248-4473

The IOB packing problem comes up on this NG many times so you could try an archive search, but a brief summary for Virtex/Spartan2 is:

o No logic before an input FF.
o No logic after an output FF - not even an inverter.
o No feedback from an output FF. (*)

For BiDirs (almost all PCI signals are such):

o A common clock.
o A common initialisation signal. The function - async/sync set/reset - of this signal can differ between the input and output FFs. The easiest way in which this can be broken is to have one FF with an init condition and the other without. (+)

I've never really had to force the use of the tri-state FF so I'm not sure what the exact rules for it are. Note that even if you think you are keeping to the rules, synthesis tools can break them behind your back, esp. (*) & (+).

Having kept to all these rules it's then best to use the `-pr b' flag to MAP, which tells it to IOB-pack FFs if at all possible. You can then use the `IOB=FALSE' attribute to exclude ones you don't want packed. It's rare that you'll need to do this, but I can give you my example: the incoming IO data bus feeds 3 registers, one "ordinary" one going into the main data path (in the IOB) and 2 async-clocked ones for IDE UDMA (not in the IOB).

Now the hard part: finding the ones that should have been packed but weren't ...

Article: 40827
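The packing rules above are usually paired with an explicit IOB attribute in the HDL source. A minimal VHDL sketch of a register that obeys the rules (entity and signal names are made up; the exact attribute syntax varies between synthesis tools, this is the XST flavour):

```vhdl
-- Sketch: output FF marked for IOB packing. No logic after the FF,
-- no feedback from it, per the rules listed above.
library ieee;
use ieee.std_logic_1164.all;

entity io_reg is
  port ( clk, rst, d : in  std_logic;
         q           : out std_logic );
end io_reg;

architecture rtl of io_reg is
  signal q_i : std_logic;
  attribute IOB : string;
  attribute IOB of q_i : signal is "TRUE";  -- ask MAP to pack this FF
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        q_i <= '0';
      else
        q_i <= d;
      end if;
    end if;
  end process;
  q <= q_i;   -- drives the pad directly: not even an inverter here
end rtl;
```

Running MAP with `-pr b' as described should then pack q_i into the IOB; the mapper report will confirm whether it did.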
"Falk Brunner" <Falk.Brunner@gmx.de> wrote in message news:<a6qsl0$gikto$3@ID-84877.news.dfncis.de>...
> "Markus Meng" <meng.engineering@bluewin.ch> schrieb im Newsbeitrag
> news:aaaee51b.0203132355.60183b6b@posting.google.com...
> > Hi all,
> >
> > In the Xilinx documentation I find that the bitstream
> > format has a minimum of 8 '1's at the very beginning of
> > the serial bitstream. Is it possible to extend the minimum
> > of 8 '1's to any higher number before the start pattern
> > is applied?
>
> I don't know at all. Why do you want to do this? The SpartanXL parts are known to
> be a little more sensitive to configuration data clocking, since they use
> a length count encoded into the first bits of the datastream.
> Spartan-II(E) are much easier to use, since the configuration is much
> simplified.

Hi Falk,

Using the XC2S300E for example ...

I'm sure you know that the ISP configuration PROM from Xilinx costs ~ 20 US$ in small quantities. I'm now working on a solution for 2$! in small quantities. That's why I want to know ...

I got feedback from a Xilinx support engineer, who explained that I need to adjust the total length count when I add '1's at the beginning. Is that your experience as well? To me it sounds somewhat strange, since clocking in '1's before the start preamble doesn't look important. Maybe I'm wrong ...

markus

Article: 40828
I've just had my main box in the office upgraded to a 1.6GHz DDR Athlon XP (yippee, I've had an ordinary DDR Athlon at home for a long time) but, according to the people who make up our PCs, WinNT doesn't run reliably on the Athlon XP. As a result I've been forced, against my will, to abandon my tried & trusted WinNT-SP6A, which has been bomb- & bullet-proof reliable for a long time, and downgrade to Win2K. I've found some strange problems & I'm wondering if anyone can help:

o The first question has to be - have I been given a load of BS about WinNT not running on the XP?

o I ran a PAR with nothing else happening on the box. Looking at the post-PAR report I can see that it took 31 minutes of CPU time for a quick 4-iteration route (non-XP home Athlon =~ 36) **but** the `real' time as reported by PAR was 51 min.

Under NT the `CPU' and `real' times are never more than a minute or so apart on a quiet machine, even with PAR set to low priority, the difference being accounted for, I'd guess, by file IO.

Any ideas what the %^$£& Win2K was doing with those 20 min?

o When I run ModelSim I keep most of the project stuff, scripts, etc. on a Unix box and export them to the PC via Samba. Under WinNT it made very little difference where the .mrp project file was when I did `Open Project'. Trying this last night under Win2K, it took ~10 min just to load the project. I hate to think what's going to happen when it tries loading an SDF.

BTW Does anyone other than me find the Windows `My Computer' and `My Network Places' stuff insufferably twee?

Article: 40829
Hi all,

I forgot to mention that I'm interested only in master serial mode configuration. The CCLKs come from the FPGA ...

markus

Article: 40830
In article <1103_1016120744@news.glue.umd.edu>, Nitin Chandrachoodan <nitin@eng.umd.edu> writes:
>3. Can be used to implement reasonably large designs,
>especially DSP filters, FFTs etc. Here we would like to
>implement the designs without having to go to too much low-
>level optimization (bit-serial implementations etc.) unless
>absolutely necessary, as this would change the focus from
>learning about FPGA implementation to low-level design.

I'm curious. What does "learning about FPGA implementation" mean to you? Are you a top-down type designer? Are you trying to teach how to use HDLs with FPGAs as compared to how to get the best performance out of an FPGA?

I'm a bottom-up person. I want to know the grubby details, and the tricks for taking advantage of them - pipelining, duplicating logic, floorplanning...

--
These are my opinions, not necessarily my employer's. I hate spam.

Article: 40831
"Austin Lesea" <austin.lesea@xilinx.com> wrote in message news:3C90C799.E64C72C2@xilinx.com...
> Virtex E was a shrink to 0.18u of the classic Virtex architecture and
> circuitry. Virtex E added LVDS input buffers to the original Virtex
> design, but little else was changed.

I understand this was not the focus of the posting, but let us not overlook one of Virtex-E's most notable enhancements -- the -E's doubling, tripling, or quadrupling of the number of block RAMs per LUT, compared to Virtex.

In base Virtex devices, there are two columns of block RAMs, one at the left edge and one at the right. In Virtex-E there are four (V50E-V400E), six (V600E-V1000E), or eight (V1600E-V3200E) columns of block RAMs. In block RAM constrained designs, this makes all the difference.

Jan Gray, Gray Research LLC

Article: 40832
>I got a feedback from a Xilinx support engineer, that explained me that I need
>to adjust the total length count when I add '1' at the beginning. Is that your
>experience as well?
>For me it sounds somewhat strange, since clocking in '1' before the start-
>preamble looks like not being important. Maybe I'am wrong ...

You should look in the data sheet or maybe an app note. There should be a reasonable description of the bitstream format so people can do things like this.

On the older chips, there used to be a total-bit-count field in the header of the bitstream. After that many CCLKs, the configuration stage ended, so (as support said) if you wanted to insert 1s, you should adjust the total count so it ends at the right time. Otherwise it will try to start running before you have sent it all the bits.

I'm not sure if Virtex works this way - it complicates partial reconfiguration. But find the documentation... If the chip doesn't have a total-bit-count mechanism then your extra 1s (before the preamble) should not be a problem.

--
These are my opinions, not necessarily my employer's. I hate spam.

Article: 40833
Oops. I wrote:

> the -E's doubling, tripling, or quadrupling of the number of block RAMs
> per LUT, compared to Virtex.

Scratch "quadrupling". There was of course no V1600-V3200 to compare V1600E-V3200E against. Sorry about that.

Jan Gray, Gray Research LLC

Article: 40834
"Falk Brunner" <Falk.Brunner@gmx.de> writes:
> "Magnus Homann" <d0asta@mis.dtek.chalmers.se> schrieb im Newsbeitrag
> news:ltsn71cyyw.fsf@mis.dtek.chalmers.se...
> > > > Is this differential? In that case I would go for daisychaining and
> > > > termination at the end. SHORT stubs at intermediate devices.
> > >
> > > No this is not differential. This is LVTTL.
> >
> > Ah, in that case I would ask my boss to be put on another project...
>
> You are a big Sissy.
>
> SCNR. ;-))

Nah, just lazy.

> A little bit more serious, is a 100 MHz LVTTL clock propagating some inches
> on a FR4 board that difficult to handle?? I mean, sure, lots of things can
> go wrong if you dont know what you are doing, BUT

Was it LVTTL? The headaches start when you don't have point-to-point clocks. With P2P, it is much easier: slap in a series termination resistor, and adjust as appropriate. Daisy-chaining works quite well, but it is a bit harder to route, and you get clock skew between devices. Star topology? Well, in some cases.

> Two guys in our company designed a board with a big communication processor,
> with 3 fast SDRAM/SSRAM/ZBTRAM busses (100-133 MHz). They did NO simulation,
> "just" had an eye on the layout and followed the basic guidelines that apply
> to this kind of stuff. And you wont believe it, it worked on the first run,
> almost perfect, just some minor modification of termination resistors and
> some clock line (length) modification.

As long as you know what you're doing.

> Your comment, Austin?? ;-)
>
> > Or use a zero-delay clock buffer (PLL/DLL), if possible.
>
> No problem, there are at least 4 inside the FPGA, Virtex-E/-II has even
> more.

Unless they have to be independent of the FPGA, of course.

Homann
--
Magnus Homann, M.Sc. CS & E
d0asta@dtek.chalmers.se

Article: 40835
"Falk Brunner" <Falk.Brunner@gmx.de> writes:
> "Magnus Homann" <d0asta@mis.dtek.chalmers.se> schrieb im Newsbeitrag
> news:ltg03190da.fsf@mis.dtek.chalmers.se...
> > rickman <spamgoeshere4@yahoo.com> writes:
> >
> > > I need to plan a high speed bus that will connect 5 devices. They will
> > > all be very closely spaced so that the lengths of the routes can be kept
> > > pretty short. The clock line is the one I am most concerned about. It
> > > is 100 MHz ECLKOUT from a TI C6711 DSP. The five devices are an SBSRAM,
> > > two SDRAMs (16 bits each for 32 bit memory) and an XC2S200E.
> >
> > Is this differential? In that case I would go for daisychaining and
> > termination at the end. SHORT stubs at intermediate devices.
>
> Isnt end termination the ONLY clean way when daisy-chaining??? (According to
> the "bible" from Howard Johnson)

Of course. Was I unclear?

Homann
--
Magnus Homann, M.Sc. CS & E
d0asta@dtek.chalmers.se

Article: 40836
Kevin Brace <ihatespam99kevinbraceusenet@ihatespam99hotmail.com> wrote:
> Yes, if I wanted the OE FFs to be merged into IOBs, I will manually
> duplicate the OE FFs myself in my design (I used to do that.), but the
> design decision I made needs only one OE FF for AD[31:0], and one OE FF
> for C/BE#[3:0].
[...]
> I don't appreciate XST overriding the design trade off I made in the
> design, and do I have a way to prevent XST from duplicating the OE FF?
> I am using ISE WebPACK 4.1WP3.0's XST (XST E.33), and Spartan-II XC2S150
> is the target device.
> I feel like this OE FF duplication thing should not happen, and hope
> that the future version of XST will give its users an option to disable
> OE FF duplication if the user doesn't want it.

That's life with the synthesis tools - they make life easy when you want to do the obvious thing, and difficult when you want to do something different. Current versions of Synplify do the same thing - duplicate the output enable FFs. Earlier versions (6.0.x, 6.1.x) didn't, and in that case it wasn't easy to manually duplicate them. It would combine your manually duplicated signals on you.

So, XST thinks it is being helpful by duplicating the OE FFs. Generally speaking I think that is pretty helpful. I can't think why you wouldn't want it. The only thing you get by not using the OE FFs is longer delays from tristate to driven or the reverse. If that's what you're trying to do, you're depending on the minimum delays in the chip - generally not a good idea, and in a lot of cases those delays aren't even published.

Hamish
--
Hamish Moffatt VK3SB <hamish@debian.org> <hamish@cloud.net.au>

Article: 40837
"Magnus Homann" <d0asta@mis.dtek.chalmers.se> schrieb im Newsbeitrag news:ltelik3fu0.fsf@mis.dtek.chalmers.se...
> > > Is this differential? In that case I would go for daisychaining and
> > > termination at the end. SHORT stubs at intermediate devices.
> >
> > Isnt end termination the ONLY clean way when daisy-chaining??? (According to
> > the "bible" from Howard Johnson)
>
> Of course. Was I unclear?

Hmm, not at all. It sounded a little bit like "there is also another solution, but I prefer this one". Anyway, understood.

--
MfG
Falk

Article: 40838
Hi All,

I am using the Aldec schematic editor from Xilinx Foundation 4.1i as my main design environment. I have recently started documenting what I am designing (oh joy ;-) and I am embedding parts of the schematics in the document. Currently this is being done with screenshots, which eats up rather a lot of space and is not conducive to very good print quality.

What I would like to do is export the schematic to a standard (e.g. WMF) vector format, so I can easily embed it in documents and edit it as a vector file. I would have thought this would be a simple option to add to schematic capture programs, but...

The nearest I have found to a solution so far is using a demo version of SVGMaker (sort of like Acrobat Distiller, except it produces SVG vector files) to produce .svg files. I figure I could then read this file in with some language and convert it into a metafile. This should be quite simple, as I have libraries that abstract away the metafile format to a graphics device context, but it all seems like a -=big=- kludge.

Does anybody know of a simpler way to do this?

Regards,
Chris Saunter

Article: 40839
"Markus Meng" <meng.engineering@bluewin.ch> schrieb im Newsbeitrag news:aaaee51b.0203160206.2c7a77c7@posting.google.com...
> > Spartan-II(E) are much easier to use, since the configuration is much
> > simplified.
>
> Hi Falk,
>
> Using the XC2S300E for example ...
>
> I'm sure you know that the ISP configuration PROM from Xilinx costs
> ~ 20 US$ in small quantities. I'm now working on a solution for 2$! in
> small quantities. That's why I want to know ...

Hmm, ok.

> I got feedback from a Xilinx support engineer, who explained that I need
> to adjust the total length count when I add '1's at the beginning. Is that
> your experience as well?

??? AFAIK the Spartan-II/Virtex don't have a length count anymore. Just as I said in my other posting, a configured Spartan-II/Virtex ignores CCLK and DIN (which is even a user IO after configuration), so it's easy to share some signals; you only need a separate PROGRAM pin.

--
MfG
Falk

Article: 40840
Hi Dan,

My suggestion is to do this on a very small test design - it can save loads of heartache when migrating to the full design. Try it on a single signal first and then grow it to a bus (if that's what you want). Often the tool can do it for a single bit but gets confused on multiples.

Open up the chip editor, look at the IOB structure & draw exactly that on your schematic - then cut out the bits you don't need - it'll make sure you're not trying to use bits that don't exist. Also, does your tri-state stimulus come from internal logic or external? If external, then the tool may get confused about where to put that flip-flop.

Like Rick said, there will be loads on this in Google Groups - probably targeted at VHDL but I think you'll find parallels.

Dave
--
dmac

Article: 40841
"rickman" <spamgoeshere4@yahoo.com> wrote in message news:3C9237E8.8D818B3A@yahoo.com...
> Magnus Homann wrote:
> >
> > rickman <spamgoeshere4@yahoo.com> writes:
> >
> > > I need to plan a high speed bus that will connect 5 devices. They will
> > > all be very closely spaced so that the lengths of the routes can be kept
> > > pretty short. The clock line is the one I am most concerned about. It
> > > is 100 MHz ECLKOUT from a TI C6711 DSP. The five devices are an SBSRAM,
> > > two SDRAMs (16 bits each for 32 bit memory) and an XC2S200E.
> >
> > Is this differential? In that case I would go for daisychaining and
> > termination at the end. SHORT stubs at intermediate devices.
> >
> > Homann
>
> No this is not differential. This is LVTTL. BTW, what do you mean by
> SHORT? Is that anything like telling someone to pay CAREFUL attention
> to signal routing? :)

Presumably 'short relative to 1/4 the wavelength', where the wavelength is that of the highest harmonic you need to maintain the integrity of the signal. You don't want the stub acting as a circuit component, as with transmission lines and microwave circuitry.

Leon

Article: 40842
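Rough numbers can be put on the 1/4-wavelength rule. Assuming FR4 with an effective relative permittivity of about 4.3 and the common knee-frequency rule of thumb $f_{knee} \approx 0.35/t_r$ (both values are assumptions, not from the thread):

```latex
v = \frac{c}{\sqrt{\epsilon_r}}
  \approx \frac{3\times10^{8}}{\sqrt{4.3}}
  \approx 1.45\times10^{8}\ \mathrm{m/s}

f_{knee} = \frac{0.35}{t_r} = \frac{0.35}{1\ \mathrm{ns}} = 350\ \mathrm{MHz}
\qquad
\frac{\lambda}{4} = \frac{v}{4\,f_{knee}} \approx 0.10\ \mathrm{m}
```

So for 1 ns edges a stub needs to be well under ~100 mm to stay electrically short; faster edges shrink that limit proportionally.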
Rick Filipkiewicz wrote:
>
> o The first question has to be - have I been given a load of BS about
> WinNT not running on the XP ?

Probably ;-)

> Under NT the `CPU' and `real' times are never more than a minute or so
> apart on a quiet machine, even with PAR set to low priority. The
> difference being accounted for, I'd guess, by file IO.

Are you using plain W2000 or already SP2?

> Any ideas what the %^$£& Win2K was doing with those 20 min ?

Calling Billy for help? ;-)

> o When I run ModelSim I keep most of the project stuff, scripts, etc on
> a Unix box and export them to the PC via Samba. Under WinNT it made very
> little difference when I did `Open Project' where the .mrp project file
> was. Trying this last night under Win2K and it took ~10min just to load
> the project. I hate to think what's going to happen when it tries
> loading an SDF.

Check the version of Samba you're using. The versions before 2.1 (IIRC) had big problems with W2000 (mostly performance). The current version of Samba is 2.2.3a (from February?).

cheers,

P.S. Sitting in the same boat. Downgraded my system from NT4 SP6a to W2000 SP2, but on a P4. As at your place, all my design files are on a box I trust.

Article: 40843
FWIW: WinNT had a meager TCP/IP stack and terrible memory management. Attempts were made to fix those in Win2K. Unfortunately, that just made matters worse. The few good features of the IP stack were removed, and the memory management was made worse. (Sort of on-topic for the group, as a lot of people run their tools under 'doze.)

I can't imagine that WinNT-SP6a has any problems with an XP. The SMP features might not work, but a single processor should. I, personally, would kick and scream to get my stable OS back. I've spent a fair amount of time doing CAD tools administration, and swapping out an OS in the middle of a project is a hanging offense.

SpamHater7

On Sat, 16 Mar 2002 10:09:55 +0000, Rick Filipkiewicz <rick@algor.co.uk> wrote:
>I've just had my main box in the office upgraded to 1.6GHz DDR Athlon XP
>(Yippee, I've had an ordinary DDR Athlon at home for a long time) but,
>according to the people who make up our PCs WinNT doesn't run reliably
>on the Athlon XP. As a result I've been forced, against my will, to
>abandon my tried & trusted WinNT-SP6A, which has been bomb & bullet-proof
>reliable for a long time and downgrade to Win2K. I've found some strange
>problems & I'm wondering if anyone can help:
>
>o The first question has to be - have I been given a load of BS about
>WinNT not running on the XP ?
>
>o I ran a PAR with no other things happening on the box. Looking at the
>post PAR report I can see that it took 31 minutes of CPU time for a
>quick 4 iteration route (non XP home Athlon =~ 36) **but** the `real'
>time as reported by PAR was 51 min.
>
>Under NT the `CPU' and `real' times are never more than a minute or so
>apart on a quiet machine, even with PAR set to low priority. The
>difference being accounted for, I'd guess, by file IO.
>
>Any ideas what the %^$£& Win2K was doing with those 20 min ?
>
>o When I run ModelSim I keep most of the project stuff, scripts, etc on
>a Unix box and export them to the PC via Samba. Under WinNT it made very
>little difference when I did `Open Project' where the .mrp project file
>was. Trying this last night under Win2K and it took ~10min just to load
>the project. I hate to think what's going to happen when it tries
>loading an SDF.
>
>BTW Does anyone other than my find the Windows `My Computer' and `My
>Network Places' stuff insufferably twee ?

Article: 40844
I think you can find a summary of what was packed and what was not by looking towards the tail end of the mapper report (whatever.mrp). For each IOB/pin, it lists the use of the INFF, OUTFF, or ENBFF.

Eric

Rick Filipkiewicz wrote:
> Now the hard part: Finding the ones that should have been packed but weren't
> ...

Article: 40845
Hi,

I have a design with 6 clocks, all below 20 MHz, so I can't use the DLL. However, I am redesigning to use only 5 clocks: 4 very slow and one at 40 MHz. The 40 MHz clock goes to a DLL and the other 4 to various inputs. 2 of these go to global clock pins, and 2 do not. The 40 MHz clock should go to a global clock pin.

However, upon synthesis (Leonardo) only the 4 slow clocks show up in the clock constraints tab; the 40 MHz clock is only listed under the input tab! I tried to force the 40 MHz clock onto a global pin, firstly by putting it in the VHDL code (BUFGP) and then, when that failed, by telling Leonardo to use a BUFGP pad. Still failed. These failures are occurring in PAR (Xilinx 3.1); it does seem to synthesise OK, but not as required.

Anyone got an insight into how to sort this out? The old design with six clocks worked just fine, 4 to global clock pins and 2 (very slow) to normal IO pins.

TIA, Niv

Article: 40846
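For reference, explicitly instantiating the global clock buffer in VHDL looks roughly like this (a sketch using the Xilinx unisim library; whether the synthesis tool honours it rather than inserting its own buffer depends on tool settings):

```vhdl
-- Sketch: pull a clock pad onto a global clock net via BUFGP,
-- which combines the pad input buffer and the global buffer.
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity clk_in is
  port ( clk40_pad : in  std_logic;   -- must be on a GCLK-capable pin
         clk40     : out std_logic ); -- buffered clock for the design
end clk_in;

architecture struct of clk_in is
begin
  u_bufgp : BUFGP
    port map ( I => clk40_pad,
               O => clk40 );
end struct;
```

The pin assignment still has to place clk40_pad on one of the dedicated global clock pins (via the UCF), or PAR will complain regardless of the instantiation.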
I have a Spartan 2 that I wish to use for an 8051 core. I need to know how to instantiate the RAM & ROM elements for this device. Any help will be appreciated. I believe I should be able to utilize the internal resources of this device without having to use external memories... I guess I'm not looking in the right area of the Xilinx site.Article: 40847
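On the block RAM question above: the Spartan-II on-chip block RAMs can be instantiated directly from the unisim library; one RAMB4_S8 gives a 512 x 8 memory, and with WE tied low it behaves as a code ROM. A sketch (port names follow the Xilinx libraries guide; the entity wrapper and the INIT handling are illustrative):

```vhdl
-- Sketch: one Spartan-II block RAM (RAMB4_S8) used as a 512x8 code ROM.
-- The program image is preloaded via the INIT_00..INIT_0F generics
-- (omitted here); WE is tied low so the contents are never overwritten.
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity code_rom is
  port ( clk  : in  std_logic;
         addr : in  std_logic_vector(8 downto 0);
         dout : out std_logic_vector(7 downto 0) );
end code_rom;

architecture struct of code_rom is
begin
  rom0 : RAMB4_S8
    -- generic map ( INIT_00 => X"...", ... )  -- program image goes here
    port map ( CLK  => clk,
               EN   => '1',
               RST  => '0',
               WE   => '0',               -- read-only: acts as ROM
               ADDR => addr,
               DI   => (others => '0'),
               DO   => dout );
end struct;
```

An 8051 typically needs several KB of code space, so expect to stack several of these (or use a wider primitive) and decode the upper address bits externally.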
It's a little rough around the edges in appearance (sorry, Mr. Cohen) but it's worth its weight in gold in terms of the lessons that can be learned from it. I had it for one day and I already learned about five new things, and I thought I had been in the industry long enough to consider myself a semi-professional. There are all sorts of tips and tricks that can be used to combat well-known design issues (I never thought of using a flancter before) and it goes through the real way to write verification suites (instead of paying lip service to some simple testbenches). Overall, I am very happy with it.

Some issues I have with it: it leans a bit more towards VHDL than Verilog, although he obviously tries to balance the two languages; the Synplify Pro schematics are sometimes a bit tough to follow; and in some cases it might be slightly better to abbreviate the code listings instead of allowing them to take up multiple pages. But like I said, in the true spirit of independent publishing, it's rough around the edges but definitely written by an expert. I will definitely recommend that the people at work pick up a copy. The only thing Mr. Cohen needs to worry about is that he might not get as much consulting work, since he already presents many solutions to common problems in his book.

I just wanted to give everyone a heads-up on my experience with this book. If you want to learn more about chip/FPGA design and verification, at least the way it is done by professionals, I recommend this book. Or if you want to support good independent publishing and encourage more experts to publish books where they can actually make money and provide quality instruction (are you listening, Mr. Andraka?), then I would also recommend this book.

This is just my two cents. I hope other people can present their reviews of this or other industry books also.

strut911

Article: 40848
strut911 wrote:
> i just wanted to give everyone a heads up on my experience with this
> book. if you want to learn more about chip/fpga design and
> verification, at least the way it is done by professionals, i
> recommend this book.

I would find this review much more credible if Mr. Strut911 from hotmail.com would identify himself.

--
Mike Treseler

Article: 40849
hamish@cloud.net.au wrote:
>
> That's life with the synthesis tools - they make life easy when you
> want to do the obvious thing, and difficult when you want to do something
> different.

Yes, a synthesis tool makes the design process easier for a state machine design compared to using schematics, I think, but when I start to do something different from what most people do, like using only one OE FF for 32 pins, I guess that's when things go wrong.

> Current versions of Synplify do the same thing - duplicate
> the output enable FFs. Earlier versions (6.0.x, 6.1.x) didn't, and
> in that case it wasn't easy to manually duplicate them. It would
> combine your manually duplicated signals on you.
>
> So, XST thinks it is being helpful by duplicating the OE FFs. Generally
> speaking I think that is pretty helpful. I can't think why you wouldn't
> want it.

I guess XST is not the only synthesis tool that has this problem. Yes, I also think XST believes it is being helpful by noticing that I use only one OE FF and automatically duplicating it for me, so that all output and OE FFs will be merged into IOBs, thus making Tco, Ton, and Toff faster; but I only want Tco to be fast, and I don't care about Ton or Toff. For Ton and Toff, reducing Tsu for the OE FF is more important. To tell you the truth, for 33MHz PCI, since Tsu < 7ns, getting an OE FF automatically duplicated is not a big problem for meeting Tsu, but I realized that this OE FF duplication is a big problem if I want my PCI IP core to have even a small chance of meeting 66MHz PCI's Tsu < 3ns. Mine doesn't have to meet 66MHz PCI's Tsu < 3ns, but it would be nicer if it had some chance.

> The only thing you get by not using the OE FFs is longer delays from
> tristate to driven or the reverse. If that's what you're trying to do,
> you're depending on the minimum delays in the chip - generally not
> a good idea, and in a lot of cases those delays aren't even published.
>
> Hamish
> --
> Hamish Moffatt VK3SB <hamish@debian.org> <hamish@cloud.net.au>

The reason why I am using only one OE FF, where most people would prefer duplicating the OE FF in the code or getting it automatically duplicated, is that I use a technique called Address/Data Stepping in my design. Address/Data Stepping is a technique in PCI for slower devices to turn on their AD[31:0], C/BE#[3:0], and PAR tri-state buffers slowly to reduce switching noise (I know virtually nothing about signal integrity issues, but this is what I read somewhere), but it can also be used to turn on those tri-state buffers one cycle later if the device cannot turn them on within Tval < 11ns. Xilinx and Altera both use this technique in their PCI IP cores, so I figured, why shouldn't I?

In my current design, a path from the control signals (FRAME#, IRDY#, DEVSEL#, TRDY#, STOP#) to an AD or C/BE# OE FF has about 4 levels of 4-input LUTs, which is not nice at all (I wish it were lower, like 2 or 3 levels, but I have got a lot of inputs), plus the routing distance is pretty long. Things can only get worse in 64-bit PCI, and I think it will be near impossible to meet Tsu < 3ns in 66MHz PCI even if more Tsu is artificially created by putting delay on the global clock buffer. So, I think Xilinx decided to make a trade-off: put an OE FF near the control signals to meet Tsu < 3ns, and have the thing turned on by the next cycle. Fortunately in PCI, Ton is infinite (I will turn it on within 30ns (15ns in 66MHz PCI) though), and Toff is 28ns (14ns for 66MHz PCI), so after the control signals are latched into the OE FF, I have plenty of time to turn the tri-state buffers on or off. The only downside to doing this is that GNT# has to be asserted for at least two cycles to do a busmaster transfer, so that can reduce performance somewhat in theory.

Kevin Brace (Typically, don't respond to me directly; respond within the newsgroup.)
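The single-OE-FF structure described in this thread looks roughly like this in VHDL (all names are illustrative; note that without a KEEP/IOB-style constraint the synthesis tool may still duplicate the enable register, which is exactly the behaviour being complained about):

```vhdl
-- Sketch: ONE output-enable FF fanning out to all 32 AD tri-state
-- buffers, as in PCI address/data stepping. The OE decision is
-- registered, so the buffers turn on one clock after the decode.
library ieee;
use ieee.std_logic_1164.all;

entity ad_bus is
  port ( clk     : in    std_logic;
         oe_next : in    std_logic;  -- decoded from FRAME#/IRDY#/... logic
         d       : in    std_logic_vector(31 downto 0);
         ad      : inout std_logic_vector(31 downto 0) );
end ad_bus;

architecture rtl of ad_bus is
  signal oe_q : std_logic := '0';
  signal d_q  : std_logic_vector(31 downto 0);
begin
  process (clk)
  begin
    if rising_edge(clk) then
      oe_q <= oe_next;   -- the single OE FF in question
      d_q  <= d;
    end if;
  end process;

  -- One enable net shared by all 32 buffers.
  ad <= d_q when oe_q = '1' else (others => 'Z');
end rtl;
```

With the enable registered like this, Tsu is paid once at oe_q; the buffers then have the full Ton/Toff budget of the following cycle, which is the trade-off the post describes.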