Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Hi everyone, I'm trying to implement direct memory access for my custom peripheral on a Virtex 2 Pro 30. It seems difficult to find a starting point for this project, as I cannot find any papers or tutorials for that from Xilinx or anybody else. I've looked at the memory controllers which are shipped with Platform Studio, but they seem to be rather complicated to use, and the communication seems to work via one of the buses, OPB or PLB. At opencores.org I found a DDR SDRAM controller written in VHDL. The protocol seems to be easy and I can easily combine this controller with my own peripheral design. But how do I connect the controller to the DDR-SDRAM module on my board? Do I again need to use one of these buses? If possible, I want to avoid that, because these buses are designed for communication with many masters/slaves and that slows down the throughput. But then there's another question: how do I import the peripheral into Platform Studio when not using one of those buses? Hope you can help me with that and perhaps give me some hints on where to start. Thanks and Regards, PeterArticle: 111026
Hi all. Now I'm using EDK 8.2.01i and I want to regenerate a project for my software development. For my goal it is important to use SPI flash, but NOW with the new EDK version (before I used EDK 8.1) it is no longer possible to use SPI flash for the SPARTAN 3E Starter Kit board!!! I did indeed see problems; it is necessary to set the bitgen configuration option -g UnusedPin to Pullnone... Perhaps Xilinx found trouble and bugs, and so SPI flash has been disabled? Thanks in advance for any answers. Al.Article: 111027
Alfmyk schrieb: > Hi all. > > Now I'm using EDK 8.2.01i and I want to generate again a project for my software development. For my goal is important use SPI Flash, but NOW with the new EDK version (before I used EDK 8.1) for SPARTAN 3E Starter Kit Board no more is possible use SPI flash !!! I saw effectively problems, that is necessary set bitgen configuration using -g unusedpin to pullnone... Perhaps Xilinx find trouble and bugs and so SPI flash has been disabled ? > > Thank in advance for any answers. > > Al. SPI is still in the XBD for 8.2 but what type of problem did you have? if the BSB cant add SPI peripheral add it manually or copy from old design AnttiArticle: 111028
Peter Kampmann schrieb: > Hi everyone, > > I'm trying to implement a direct memory access for my custom peripheral > on a Virtex 2 Pro 30. > It seems difficult to me to find a starting point for this project, as > I cannot find any Papers, Tutorials for that from Xilinx or anybody Choice 1: select the MCH_DDR controller and connect your peripheral to the XCL port of the MCH_ controller. It's a really simple interface; if the OPB doesn't access the memory there will be no arbitration, and your peripheral has exclusive access. anttiArticle: 111029
I want to connect an SFP module to the XC2VP7 (using RocketIO in custom mode at 2488 Mbps, i.e. STM-4). But I cannot find an example circuit anywhere. Are they connected in AC-coupled mode only, or can I connect them DC-coupled? In the data sheets for SFP modules I see that internal 0.1 µF capacitors are connected in series with the RX and TX data lines. Where can I see examples or read about this? ThankArticle: 111030
Hi Antti. Yes, you are right: I could add the SPI part without BSB or use an older design... Anyway, we found contention on the SPI bus caused by the Intel Strata Flash. It happens even without adding the Intel flash itself to the design. So I suppose Xilinx has disabled adding SPI in BSB due to this problem... I suppose... but I'm not sure. Anyway, it makes no sense that I have to add this line to the bitgen configuration (bitgen.ut): -g UnusedPin:Pullnone to get the SPI flash functioning properly. Cheers, Al.Article: 111031
Alfmyk schrieb: > Hi Antti. > > Yes, you are right: I could add SPI a part without BSB or use an older design... Anyway we found a conflict contention on SPI Bus caused by Intel Strata Flash. It happen also without add Intel Flash itself on design. > > So I supposed Xilinx has disabled SPI adding on BSB due to this problem... I suppose... But I/m not sure. > > Anyway have no sense I have to add to bitgen configuration (bitgen.ut) this line: > > -g UnusedPin:Pullnone > > to get a good SPI flash functioning. > > Cheers, > > Al. ah, there is no contention if a proper design is used! You have to have a PULLUP on the unused pin to DESELECT the parallel flash. The default is PULLDOWN, so the parallel flash gets selected even when not needed. If you aren't using the parallel flash, then add external ports to disable it, wire them to net_vcc, and verify the UCF. That should take care of it. anttiArticle: 111032
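[For readers hitting the same Strata Flash contention, the two fixes discussed above boil down to two small fragments. The net name below is hypothetical; use whatever your UCF calls the parallel flash chip-enable.]

```
# bitgen.ut -- Al's workaround: leave unused pins floating instead of
# the default pulldown, so the parallel flash CE is not driven active
-g UnusedPin:Pullnone
```

```
# UCF -- Antti's cleaner fix, sketched: pull the parallel flash
# chip-enable high so the part stays deselected
# ("flash_ce_n" is a hypothetical net name)
NET "flash_ce_n" PULLUP;
```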
Apologies for the (slightly) commercial nature of this post. Those of you with long memories may recall coding devices in ABEL. One nice thing about ABEL was that you could write very simple vector files to simulate your device, where a vector was something like [C,1,0,1,etc] -> [1,1,0,1,HHHH,01AB,etc] ie. setup some inputs, apply a clock pulse, and check some outputs. I've written some software over the years that lets me do the same thing in VHDL, with various extensions, and I use it to test most of my RTL code. It's simple, you don't need to write or know *any* VHDL to use it, and it gives you a pass/fail very quickly, for a module or a whole device. I'm thinking about brushing this up a bit, adding Verilog support, and flogging it for maybe 100 - 300 USD a go. To use it, you obviously still need a simulator - the software currently produces VHDL-only output, and uses your simulator to simulate your chip using the auto-generated verification code. This brings me to my problem. I can make the software a lot more sophisticated if I can generate C code, as well as (or instead of) the VHDL or Verilog. There are some testbenchy things which are just very difficult to do in pure VHDL or Verilog. *But*, most of the potential users of this software will be FPGA coders with a cheap simulator that doesn't support a C-language interface (ModelSim PE/VHDL on Windows, for example, doesn't do this, and presumably the FPGA-specific simulators don't do this either). What I'd really like to find out, if you can spare the time and this might be of interest to you, is: * What simulator do you use? * Is your RTL code in Verilog/VHDL/both? * Does your simulator have a C-language interface? From Verilog, or VHDL, or both? * If your simulator doesn't support C, would you be willing to upgrade it to use a product of this sort? Or would you prefer to get pure VHDL or Verilog out of this software, even if it means reduced vector file functionality? 
As a bonus, if you add the line "this is a great idea and I claim my 50% discount", then you can have 50% off the (initial) purchase price, if I ever get around to doing this. You can reply here or directly to me at 'unet+50' 'at' 'riverside-machines' 'dot' 'com'. Thanks - EvanArticle: 111033
"Evan Lavelle" <eml@nospam.uk> wrote in message news:8r24k25usdp7g5p6k3outelti47fu2enmh@4ax.com... .. > This brings me to my problem. I can make the software a lot more > sophisticated if I can generate C code, as well as (or instead of) the > VHDL or Verilog. What about supporting Tcl, most if not all simulators support this language. You can force and examine signal using Modelsim's Tcl. > There are some testbenchy things which are just very > difficult to do in pure VHDL or Verilog. Such as? > *But*, most of the potential > users of this software will be FPGA coders with a cheap simulator that > doesn't support a C-language interface (ModelSim PE/VHDL on Windows, > for example, doesn't do this, and presumably the FPGA-specific > simulators don't do this either). You can get a SystemC license for Modelsim PE which IMHO is much much easier to use than the FLI/PLI. > > What I'd really like to find out, if you can spare the time and this > might be of interest to you, is: > > * What simulator do you use? Modelsim SE. > > * Is your RTL code in Verilog/VHDL/both? VHDL. > > * Does your simulator have a C-language interface? From Verilog, or > VHDL, or both? Yes, SE comes with the FLI which I have used in the past until I discovered SystemC :-) > > * If your simulator doesn't support C, would you be willing to upgrade > it to use a product of this sort? No since I don't understand the advantages of your product, also changing PE to SE or adding a SystemC license to PE is not particular cheap and it might be more cost effective to spend some more time on your testbench :-) Hans www.ht-lab.comArticle: 111034
Hello Peter, I'm taking advantage of you responding: What's the proper procedure to feed back documentation typos to Xilinx ? Back to the original post, i'm now able to be more precise on one of the above errors: * ug191 v1.2, table 6-7 page 103, the block type 001 is block RAM content and BRAM routing is actually included in standard, type-000 frames. type 010 should be scrapped as bram type, as now seems obvious from re-reading the table. I'm also a bit puzzled about documentation inconsistencies between device generations, for instance from virtex4 to virtex5. The virtex4's configuration frames contain the exact same 12 bit Hamming ECC code, at the very same position as described in the ... virtex5 documentation. If Xilinx chose not to make the information public for the v4, how come it's now cleartext in the v5 documentation (not that i'm complaining...) ? Why not at least a pointer in the v4 doc to direct people to the v5 doc ? JBArticle: 111035
Hi all. I have seen these functions in the uBlaze interface header: void microblaze_update_icache (int, int, int) void microblaze_update_dcache (int, int, int) I'm not sure when I have to use them. Every time I want to empty and update the cache??? What are the three input parameters? Thanks! Al.Article: 111036
Hi all. Up to now I have never implemented an ISR on a uBlaze system. I know I have to use the INTC IP driver routines, but I have seen there are also some commands in the uBlaze interface header. For example: microblaze_enable_interrupts(void); microblaze_disable_interrupts(void); I imagine that first of all I HAVE to use microblaze_enable_interrupts(void), since it has priority and works directly on the uBlaze CPU. After that I should instantiate the INTC structure/object, initialize it, and enable it. Then I have to connect my ISR with the proper instruction, but I don't know very well how to do it... Please could someone write me a list of steps (in order) and if possible a little example? Thanks in advance. Al.Article: 111037
Evan Lavelle wrote: > This brings me to my problem. I can make the software a lot more > sophisticated if I can generate C code, as well as (or instead of) the > VHDL or Verilog. There are some testbenchy things which are just very > difficult to do in pure VHDL or Verilog. *But*, most of the potential > users of this software will be FPGA coders with a cheap simulator that > doesn't support a C-language interface (ModelSim PE/VHDL on Windows, > for example, doesn't do this, and presumably the FPGA-specific > simulators don't do this either). Cheap/free simulators *should* support some sort of C interface. Icarus Verilog supports VPI and I'd be astonished if the bundled Verilog compilers w/ FPGA vendor tools doesn't also support VPI. -- Steve Williams "The woods are lovely, dark and deep. steve at icarus.com But I have promises to keep, http://www.icarus.com and lines to code before I sleep, http://www.picturel.com And lines to code before I sleep."Article: 111038
"Alfmyk" <alfmyk@hotmail.com> schrieb im Newsbeitrag news:ee9fdd4.-1@webx.sUN8CHnE... > Hi all. > > At the moment I never implement a ISR on uBlaze system. > > I have seen, I know I have to use the INTC IP driver routines, but I have > seen also there are some commands in uBlaze interface Header. > > For example: microblaze_enable_interrupts(void); > microblaze_disable_interrupts(void); > > I imagine before of all I HAVE to use microblaze_enable_interrupts(void); > that it has priority and works directly on uBlaze CPU. > > After it I should instance INTC structure/object, initialize it, enable > it. > > After I have to connect my ISR with proper instruction but I don't know > very well how to do it... Please someone could write me a list of steps > (in order) and if possible write a little example? > > Thanks in advance. > > Al. step by step Optional Step 0: learn to read Step 1: browse in EDK\ and open some example that uses INTC Step 2: read and study the example Step 3: implement your code base on the example AnttiArticle: 111039
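[To flesh out the steps above, here is a minimal sketch of the usual order of calls for a MicroBlaze + XPS INTC system. This is not a complete EDK example: the `XPAR_*` vector ID and `my_isr` are placeholder names, and the real IDs come from the `xparameters.h` generated for your particular design. Note that enabling interrupts in the CPU comes last, not first.]

```c
/* Sketch only: register and enable one interrupt via the XPS INTC driver.
 * XPAR_INTC_0_MYPERIPH_VEC_ID and my_isr are assumed names. */
#include "xparameters.h"
#include "xintc.h"
#include "mb_interface.h"

static XIntc intc;

static void my_isr(void *ref)
{
    /* service and acknowledge your peripheral here */
}

int main(void)
{
    /* 1. initialize the interrupt controller */
    XIntc_Initialize(&intc, XPAR_INTC_0_DEVICE_ID);

    /* 2. connect your handler to the peripheral's interrupt input */
    XIntc_Connect(&intc, XPAR_INTC_0_MYPERIPH_VEC_ID, my_isr, NULL);

    /* 3. start the controller and unmask the one input you use */
    XIntc_Start(&intc, XIN_REAL_MODE);
    XIntc_Enable(&intc, XPAR_INTC_0_MYPERIPH_VEC_ID);

    /* 4. hook the INTC's master handler to the CPU, then enable
     *    interrupts in the MicroBlaze itself -- this comes last */
    microblaze_register_handler((XInterruptHandler)XIntc_InterruptHandler,
                                &intc);
    microblaze_enable_interrupts();

    for (;;)
        ;   /* main loop */
}
```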
mkaras wrote: > Elmo Fuchs wrote: > >>Hi all, >> >>I'm currently developing a PCB featuring a Xilinx Virtex-4 device. Unlinke >>the Virtex-II series they now offer the possibility to route various clock >>signals to several domains on the FPGA and select them locally by specific >>clock multiplexer inputs. >>Because of the restricted amount of available pins on the device I selected >>(Virtex-4 FX40 with 352 user I/Os) I would like to use just one clock input >>on each side of the FPGA, thereby saving clock multiplexer inputs which I >>can use as normal GPIOs, and use an external clock multiplexer instead for >>my 3 clocks. >>Has anyone made experience with such or similar solution? Has anyone used an >>external clock multiplexer device for frequencies up to 500 MHz, yet? Is >>there any recommendation which chip I could use for this application in >>terms of jitter, etc.? And by the way... is my approach advisable, at all? >> >>Any comments are appreciated. >> >>Regards Elmo > > > What I find amazing is that you feel the need to sacrifice a chip's > normal clock architecture just for the sake of gaining a few more I/O > pins. If you are so tight on your design fit that you are approaching > nearly 100% I/O utilization on the FPGA then you better begin looking > at another approach. > > First off let me describe the issues of designing to the limit of a > part. It is always wise to reserve some number of I/O pins on a design. > These are needed for several very important reasons. One is the need to > allow for some additional expansion in case you discover the need for > bring a bit more of the outside world in or to bring out some controls > for other devices or logic. With new designs it is very often that even > with the best design intentions one can end up overlooking the need for > additional stimulous going in or status coming out. 
Second there the > the huge benefit of having some extra I/Os on test points that you can > temporarily connect into internal parts of the FPGA circuit to support > the debug and design validation process. Big FPGAs with lots of complex > embedded circuitry can be a challenge to debug and the visibility pins > will be a godsend if and when you need them. Lastly there is a common > habit formed by FPGAs that once you lock the pins and commit to the PC > board connectivity subsequent changing of the internal logic definition > in a big way can sometimes make it next to impossible to keep the same > pin allocation. A few spare pins on each side or within each I/O block > of the chip can often allow a re-fit to work if you free one or two > I/Os from their previously locked pins. > > When considering how to move away from a design that is targeting > nearly 100% utilization you can consider a number of approaches. Look > at partitioning the design in a manner that you could put it into pair > of smaller devices. On the other hand there is also the possibility to > move to a larger FPGA device as well. A third thing to look at is if a > considerable number of I/O pins can be saved by using some simple fixed > logic devices at the periphery of the chip. A simple example is the > parallel to serial conversion strategy that could be implemented with > cheap shift registers to support many slow GPIOs with a few I/O pins at > the FPGA. It is relatively easy to design FPGA logic that can free run > a shift in or shift out process to keep the external GPIO states in > synch with internal nodes with a modest latency. > > Good Luck > - mkaras > Regardless of the issue of utilising 100% of the IO pins, the onboard clock multiplexers do a lot of the heavy lifting for you. I would suggest multiplexing ordinary IO rather than the clocks. On using 100% of IO - in a practical design where space (and cost) are at a premium, I will use a device that has *what I need* and no more. 
Now I know that means it's tough to reconfigure and add signals, to say nothing of looking at the internal state by toggling lines appropriately (although that can be done if you're sneaky enough about it), but in a shipping design I need to look at cost - and IO pins (because they enlarge the package) are a very high cost on an FPGA. I have a design in development right now where I am at the limit of IO pins on an FPGA and I am not going to add a separate device (except perhaps a single SPI IO device - cheap enough, but there are other issues), nor move to a larger FPGA (because I have to move to a larger core to get more IO in this particular family). Using 2 smaller devices may or may not be appropriate - if you need _lots_ of IO and a small amount of logic it might work, but there's still power to be run, interfaces to be set up etc., and then when adding the cost of configuration devices (and you have to put those down if the resources in the FPGA are required for a processor at boot time) it's easily possible to exceed the cost of a large FPGA with two small ones, to say nothing of footprints (config devices are in god-awfully large packages). So it's not an easy question to answer, but there _are_ times it's perfectly reasonable to have used 100% of FPGA IO pins. Cheers PeteSArticle: 111040
Chris wrote: > > For Flash write back both options are available -- PERSISTENT=ON for > > sysConfig port and through JTAG. > > Yes that is what I thought too, but what good is it since the local FAE told > me there is no spare flash space. If there is spare flash area for user > data, please let me know. The datasheet gives little info on this. sorry i'm not that technical, but i think for applications that would require less than 1,000 writes, you could use the flash. here's how: the JTAG verify command reads out the flash data without changing the configuration in the SRAM. you just need to use JTAG to write to the flash for the user data you need and use the JTAG verify command to get the data back out. not that useful, since you want to turn off the part and reconfigure? well, i think you should be able to write some data (let's say version 1, version 2, version 3, etc...) into the flash that wouldn't alter the configuration of the FPGA except for a few memory bits. you would just need to figure out which configuration bits to change in the flash to give you your ROM in the SRAM array. that way, each time you turned the part on, you would get the new ROM information along with your regular FPGA configuration. if your application doesn't require using the flash more than 1,000 times, i would suggest you talk to Lattice technical support. hope this helps. rgds, bartArticle: 111041
Evan Lavelle wrote: > Apologies for the (slightly) commercial nature of this post. > > Those of you with long memories may recall coding devices in ABEL. One > nice thing about ABEL was that you could write very simple vector > files to simulate your device, where a vector was something like > > [C,1,0,1,etc] -> [1,1,0,1,HHHH,01AB,etc] > > ie. setup some inputs, apply a clock pulse, and check some outputs. > I've written some software over the years that lets me do the same > thing in VHDL, with various extensions, and I use it to test most of > my RTL code. It's simple, you don't need to write or know *any* VHDL > to use it, and it gives you a pass/fail very quickly, for a module or > a whole device. So this requires you generate all outputs ? If the sim does fail, what format is the failure report, and how does the user relate that back to a given line of test source code ? > > I'm thinking about brushing this up a bit, adding Verilog support, and > flogging it for maybe 100 - 300 USD a go. To use it, you obviously > still need a simulator - the software currently produces VHDL-only > output, and uses your simulator to simulate your chip using the > auto-generated verification code. > > This brings me to my problem. I can make the software a lot more > sophisticated if I can generate C code, as well as (or instead of) the > VHDL or Verilog. There are some testbenchy things which are just very > difficult to do in pure VHDL or Verilog. *But*, most of the potential > users of this software will be FPGA coders with a cheap simulator that > doesn't support a C-language interface (ModelSim PE/VHDL on Windows, > for example, doesn't do this, and presumably the FPGA-specific > simulators don't do this either). > > What I'd really like to find out, if you can spare the time and this > might be of interest to you, is: Interesting idea. What about doing a simple VHDL/Verilog/Demo-C version that is free, and a smarter version that is $$ ? 
A problem with this type of tool, is explaining to each user how its feature set can offer more to a given task, so the free/simple versions ( and good examples) do that. For those who want it, I think most? FPGA vendors have (free) waveform entry for simulation entry, so that is one competition point. There is also this work by Jan Decaluwe, that uses Python for both verilog generation, and testbench generation. http://myhdl.jandecaluwe.com/doku.php/start http://myhdl.jandecaluwe.com/doku.php/cookbook:stopwatch -jgArticle: 111042
Hi all: I'd like to draw your attention to an interesting project about an FPGA-based music synthesizer. A remarkable feature is that the designer (George Pantazopoulos) is using Python for all front-end development work. In particular, all FPGA hardware is designed using MyHDL, a Python-based hardware description language. This project demonstrates that this technology is real and practical. The project page, including photos and synthesized sound samples, is here: http://myhdl.jandecaluwe.com/doku.php/projects:phoenixsid_65x81 Best regards, Jan -- Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com Losbergenlaan 16, B-3010 Leuven, Belgium From Python to silicon: http://myhdl.jandecaluwe.comArticle: 111043
I changed the conf ratio to "00001011", bypassed the WB_interface and it started working. The ACLK and LRCLK seemed to be what I expected. However, the SCLK was not. I used a FIFO to latch the data and read it out based on ACLK. Thanks. Cbr cbr_929rr wrote: > According to the manual, > The sample rate is given by bit_rate/(RES * 2) > > If I choose the sample rate to be 48 KHz, and the resolution is > 20-bits, > the bit rate would be 1.92 Mbps. > > However, the RATIO would then be calculated as follows. > > RATIO= (wishbone_bus_clock - (sample_rate * RES * 8)) / > (sample_rate * RES * 4) > > wishbone_bus_clock= 50 MHz > sample rate = 48 KHz > RES=20 bits > > which would then be equal to ~22 (10110) > This is not giving me the values of 48KHz for WS and 3.072 MHz for > i2s_sck. > > What am I missing? > I really need some help. > > Thanks, > cbr > > marc_ely wrote: > > If you look at page 6 of the PDF datasheet that comes with the core you > > will see two simple formulas for RES and RATIO. These two feature in > > the TX and RX conf registers, and are used to set sample rate etc. > > You need to ensure TX is in master mode too. > > > > Marc > > > > > I'm trying to use the Opencores I2S master logic as the driver for my > > > test platform. > > > > > > The core came configured with SCK = 2.77 MHz and WS(left/right clock)= > > > 69.44 KHz. > > > > > > I would like to be able to reconfigure the core to generate SCK=3.07MHz > > > and WS=48 KHz. > > > > > > I played around with the conf variable but could not get the ratio I'm > > > looking for. > > > > > > Could I get some help? > > > > > > ThanksArticle: 111044
Evan Lavelle <eml@nospam.uk> writes: > Apologies for the (slightly) commercial nature of this post. > > Those of you with long memories may recall coding devices in ABEL. One > nice thing about ABEL was that you could write very simple vector > files to simulate your device, where a vector was something like > > [C,1,0,1,etc] -> [1,1,0,1,HHHH,01AB,etc] > > ie. setup some inputs, apply a clock pulse, and check some outputs. > I've written some software over the years that lets me do the same > thing in VHDL, with various extensions, and I use it to test most of > my RTL code. It's simple, you don't need to write or know *any* VHDL > to use it, and it gives you a pass/fail very quickly, for a module or > a whole device. If you really mean such a simple format, my answer is: don't bother, that way is obsolete for anything but the most trivial designs. Given the amount of testing and the complexity of many interfaces/busses today, I need to write my test vectors at a higher level than applying a bitvector per clock cycle. In order to do that, I normally create a library of high-level functions in a scripting language like Perl or Tcl, which would do "the dirty work" for me. Normally I would make the script library interface directly to a VHDL/Verilog testbench through TEXTIO files. With your extension, I would still have to make the script library, but just have a "backend" that emits files compatible with your format. All I'd save is writing my VHDL testbench for the interface (which could have serious performance impacts, since making the simulator stop for each clock cycle is a real no-no for performance), and frankly, writing the VHDL testbench isn't the problem. > doesn't support a C-language interface (ModelSim PE/VHDL on Windows, > for example, doesn't do this, and presumably the FPGA-specific > simulators don't do this either). Affirmative (at least for the XE version). > * What simulator do you use? 
Modelsim PE / XE > * Is your RTL code in Verilog/VHDL/both? VHDL, but we have to deal with Verilog as well due to backannotated GL sims. > * If your simulator doesn't support C, would you be willing to upgrade > it to use a product of this sort? Considering that the list price of Modelsim SE is 170K DKK vs 70K DKK for a PE license (both floating licenses), not bloody likely. > Or would you prefer to get pure VHDL > or Verilog out of this software, even if it means reduced vector file > functionality? With a little imagination, you can do a surprising amount intelligent work with "dumb" TEXTIO vector files, by carefully dividing the work between script library (vector generation) text files, and the testbench itself. Kai -- Kai Harrekilde-Petersen <khp(at)harrekilde(dot)dk>Article: 111045
Jan Decaluwe wrote: (snip) > description language. This project demonstrates that this technology > is real and practical. Not really. I know a guy that built a SID clone using mostly 74xxx parts. I also know another guy that wrote his own schematic capture in order to design, that's right, a SID clone. Does this demonstrate that their technology was real and practical? That page has been there for a long while. And I am still waiting for the source code so I can form my own opinion. George, are you listening? burnsArticle: 111046
Here is the response from the responsible group within Xilinx: "For documentation such as data sheets, user guides, app notes, and similar documentation please use the "Helpful?" link under each doc: This will pop-up a simple form that can be filled out with feedback for the specific document which will then be funneled into the appropriate group. The customer does have to acknowledge that Xilinx will not respond directly to the feedback. If the feedback is on software documentation that is produced by DSD for ISE/EDK, you can send the feedback to isedocs@xilinx.com. I hope this helps, and it avoids the issues of registration and passwords and waits... Peter Alfke ================================ jbnote wrote: > Hello Peter, > > I'm taking advantage of you responding: > > What's the proper procedure to feed back documentation typos to Xilinx > ? > > Back to the original post, i'm now able to be more precise on one of > the above errors: > > * ug191 v1.2, table 6-7 page 103, the block type 001 is block RAM > content and BRAM routing is actually included in standard, type-000 > frames. type 010 should be scrapped as bram type, as now seems obvious > from re-reading the table. > > I'm also a bit puzzled about documentation inconsistencies between > device generations, for instance from virtex4 to virtex5. The virtex4's > configuration frames contain the exact same 12 bit Hamming ECC code, at > the very same position as described in the ... virtex5 documentation. > If Xilinx chose not to make the information public for the v4, how come > it's now cleartext in the v5 documentation (not that i'm > complaining...) ? Why not at least a pointer in the v4 doc to direct > people to the v5 doc ? > > JBArticle: 111047
To paraphrase Karl Marx: A spectre is haunting this newsgroup, the spectre of metastability. Whenever something works unreliably, metastability gets the blame. But the problem is usually elsewhere.

Metastability causes a non-deterministic extra output delay when a flip-flop's D input changes asynchronously, and happens to change within an extremely narrow capture window (a tiny fraction of a femtosecond!). This capture window is located at an unknown (and unstable) point somewhere within the set-up time window specified in the data sheet. The capture window is billions of times smaller than the specified set-up time window. The likelihood of a flip-flop going metastable is thus extremely small. The likelihood of a metastable delay longer than 3 ns is even less.

As an example, a 1 MHz asynchronous signal, synchronized by a 100 MHz clock, causes an extra 3 ns delay statistically once every billion years. If the asynchronous event is at 10 MHz, the 3 ns delay occurs ten times more often, once every 100 million years. But a 2.5 ns delay happens a million times more often! See the Xilinx application note XAPP094.

You should worry about metastability only when the clock frequency is so high that a few ns of extra delay out of the synchronizer flip-flop might cause failure. The recommended standard procedure, double-synchronizing in two close-by flip-flops, solves those cases. Try to avoid clocking synchronizers at 500 MHz or more... So much about metastability.

The real cause of problems is often the typical mistake, when a designer feeds an asynchronous input signal to more than one synchronizer flip-flop in parallel (or an asynchronous byte to a register, without an additional handshake), in the mistaken belief that all these flip-flops will synchronize the input on the same identical clock edge. This might work occasionally, but sooner or later a subtle difference in routing delay or set-up time will make one flip-flop use one clock edge, and another flip-flop use the next clock edge. Depending on the specific design, this might cause a severe malfunction.

Rule #1: Never feed an asynchronous input into more than one synchronizer flip-flop. Never ever. Peter AlfkeArticle: 111048
This is the official answer from our support organization: "The proper procedure for feeding back documentation typos to Xilinx (as well as for requesting technical support) is to use our great WebCase tool, which will help Xilinx to efficiently capture all relevant information about the issue up front. It can be accessed here: http://www.xilinx.com/support/clearexpress/websupport.htm . Other helpful technical support resources can also be found here: http://www.xilinx.com/support/techsup/tappinfo.htm On Oct 27, 9:23 am, "jbnote" <jbn...@gmail.com> wrote: > Hello Peter, > > I'm taking advantage of you responding: > > What's the proper procedure to feed back documentation typos to Xilinx > ? > > Back to the original post, i'm now able to be more precise on one of > the above errors: > > * ug191 v1.2, table 6-7 page 103, the block type 001 is block RAM > content and BRAM routing is actually included in standard, type-000 > frames. type 010 should be scrapped as bram type, as now seems obvious > from re-reading the table. > > I'm also a bit puzzled about documentation inconsistencies between > device generations, for instance from virtex4 to virtex5. The virtex4's > configuration frames contain the exact same 12 bit Hamming ECC code, at > the very same position as described in the ... virtex5 documentation. > If Xilinx chose not to make the information public for the v4, how come > it's now cleartext in the v5 documentation (not that i'm > complaining...) ? Why not at least a pointer in the v4 doc to direct > people to the v5 doc ? > > JBArticle: 111049
it is: i,s,e,d,o,c,s, at xilinx . com (minus all the commas and spaces) Sorry for the problem... Peter On Oct 27, 2:14 pm, "Peter Alfke" <p...@xilinx.com> wrote: > Here is the response from the responsible group within Xilinx: > > "For documentation such as data sheets, user guides, app notes, and > similar documentation please use the "Helpful?" link under each doc: > This will pop-up a simple form that can be filled out with feedback for > the specific document which will then be funneled into the appropriate > group. The customer does have to acknowledge that Xilinx will not > respond directly to the feedback. > > If the feedback is on software documentation that is produced by DSD > for ISE/EDK, you can send the feedback to > ised...@xilinx.com. > > I hope this helps, and it avoids the issues of registration and > passwords and waits... > Peter Alfke > > ================================ > > jbnote wrote: > > Hello Peter, > > > I'm taking advantage of you responding: > > > What's the proper procedure to feed back documentation typos to Xilinx > > ? > > > Back to the original post, i'm now able to be more precise on one of > > the above errors: > > > * ug191 v1.2, table 6-7 page 103, the block type 001 is block RAM > > content and BRAM routing is actually included in standard, type-000 > > frames. type 010 should be scrapped as bram type, as now seems obvious > > from re-reading the table. > > > I'm also a bit puzzled about documentation inconsistencies between > > device generations, for instance from virtex4 to virtex5. The virtex4's > > configuration frames contain the exact same 12 bit Hamming ECC code, at > > the very same position as described in the ... virtex5 documentation. > > If Xilinx chose not to make the information public for the v4, how come > > it's now cleartext in the v5 documentation (not that i'm > > complaining...) ? Why not at least a pointer in the v4 doc to direct > > people to the v5 doc ? > > > JB