Falk Brunner wrote:
> "Russell Shaw" <rjshaw@iprimus.com.au> schrieb im Newsbeitrag
> news:3C514E29.481AF4D8@iprimus.com.au...
> > Does the free Xilinx WebPACK come with a technology viewer?
> > (shows a schematic equivalent of your VHDL)
>
> AFAIK, no (I haven't found it)

Ok, if you want to check that any latches have been generated without staring at each line of VHDL, how would you do that in the Xilinx tools?

Article: 38826
CLBs are more meaningful. A Spartan-II CLB is roughly equivalent to two 4K CLBs, as it contains two "slices", each of which has two LUTs, two flip-flops and a carry chain. Take a look at the data sheet. You'll wonder how you did without some of the features like the SRL16 and the block RAMs.

The equivalent-gates figure is a marketing number. The number includes a figure for the block RAM in the Spartan-II. For a given number of gates you get fewer LUTs but more memory. That trend continues with the Virtex-II also. Note also that the gate counts are at the whim of the marketing department. To wit, what was a 4013 with 13K gates magically became an XCS30 with 30K gates, even though the XCS30 is for all intents and purposes a cost-reduced 4013, no new stuff in there. Count CLBs, not gates, and season that count with the additional capabilities offered by the architecture.

Chatpapon Prasartsee wrote:
> I used Xilinx XC4010E in the last project and now I am going to use
> Spartan-II FPGA. I don't understand something about the number of CLB and
> gates in the chip and how to compare between two chip families.
> -----------------------------
> XC4010E
> Logic cell = 950
> Gate range = 7,000 - 20,000
> Total CLB = 400
> -----------------------------
> Spartan-II XC2S30
> Logic cell = 972
> System Gates = 30,000
> Total CLBs = 216
> -----------------------------
> Obviously, the XC2S30 has more gates, but its CLBs is less than the XC4010E.
> My question is:
> I think that the CLB numbers are different because of the difference in the
> architecture of these two chip families. Am I right?
> Which one is bigger? What is the criteria to use when choosing Xilinx chips:
> CLBs or number of gates?
> Which FPGA in XC4000 family is equivalent to XCS30?
>
> Best Regards,
> Chatpapon

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759

Article: 38828
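[To make the SRL16 point in Ray's post above concrete, here is a minimal Verilog sketch, not from the original thread and with made-up names, of the kind of shift register that Xilinx synthesis will typically collapse into one SRL16 LUT per bit, provided it has no reset and only the final tap is used.]

module srl16_style_sr (
    input  wire clk,
    input  wire ce,     // clock enable
    input  wire din,
    output wire dout
);
    // 16-deep shift register: no reset, only the last tap used.
    // A pattern like this normally packs each bit into a single SRL16 LUT
    // rather than sixteen flip-flops, which is part of why gate counts and
    // CLB counts are hard to compare across families.
    reg [15:0] taps;
    always @(posedge clk)
        if (ce)
            taps <= {taps[14:0], din};

    assign dout = taps[15];
endmodule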
I missed that this was a combinatorial. Still, 10ps is no margin. You can eat up more than that with tolerances on the threshold voltages, not to mention signal integrity. We naturally have to trust the timing files, as there is nothing else to tell us the design will work. Experience has taught me to pad the numbers a bit though. John Handwork wrote: > Clock jitter is a serious issue in most systems and should never be > ignored. It looks like - in this case - the goal is the go from input > pads (csn, rdn) to asynchronous output data. No DLL involved. Not > here, at least. Are you concerned about the DSP clocking? > > I have this bad habit of trusting the timing files to a decent degree. > At least lab-type conditions make the timing numbers more reliable than > full temp/voltage operation. Ya think? > > - John_H > > Ray Andraka wrote: > > > > John_H wrote: > > > > > If you want to jump through some hoops *and* if you can guarantee that csn disables > > > before rdn (or vice-versa) I can give you latch tristates that should give you 6.3nS > > > and better at the 12mA drive in an xc2s100-5. If you want to increase to 24mA drive > > > you should be able to get this to 10s of picoseconds over the 6ns target as long as > > > your data pins are near the control pins. > > > > > > Interested? > > > Using Synplify? > > > Using Verilog? > > > Comfortable checking the timing as an exception (editing the pcf or running timing > > > analyzer with path tracing)? > > > > > > If you want to push the bounds you have to push the code. > > > > and trust the timing files, and have no clock jitter. 10ps is no margin, and with the > > typical jitter on the CLKDLL is probably actually short of the 6ns goal. > > > > > > > > > > > - John > > > > > > John_H wrote: > > > > > > > You're looking for input-logic-tristate_out times that are extremely low. > > > > Admirable. Tough. Can't get there with just a UCF constraint. > > > > > > > > I'm basing the info below on recent Spartan-II design efforts - different > > > > devices may have different characteristics: > > > > > > > > If there's no way your enables can work properly with registered control logic > > > > (Tristate register in each IOB) then the only way to get your times is to > > > > "trick" the IOB elements into giving you the routing without the intermediate > > > > logic. If you use IOB registers for the data and need the fast clock-to-out > > > > times you may not have a choice. If your outputs are combinatorial or if you > > > > can push the output registers out of the IOB you have a chance. > > > > > > > > By using the IOB tristate register as a latch (which means the output must be a > > > > latch or combinatorial) you can work with the control signals for the data, the > > > > enable, and/or the reset to "effectively" provide the logic you're looking for, > > > > replicated in every output as a separate tristate latch. The result is that the > > > > signals don't have to get from the pads to the LUTs and back out to the pads but > > > > can go straight from pads to pads. This is especially helpful if your control > > > > and data signals are on the same side of the device. > > > > > > > > Any other attempts at improvement that I can come up with involve pin changes. > > > > > > > > - John > > > > > > > > Markus Meng wrote: > > > > > > > > > Hi all, > > > > > > > > > > actually I need to 'fight' with the UCF File in order to decrease the delay > > > > > until the databus buffers do open from tristate to drive. 
I have the > > > > > following: > > > > > > > > > > When dsp_csn AND dsp_rdn are low, then open the buffers of the data bus. > > > > > In the UCF-File I did the following setting, which seems impossible to meet. > > > > > Each time I get 7.5..8.3ns until the buffers are open and driving. > > > > > > > > > > Can anybody give me an advice how I can improve it, without changing the > > > > > pinning or the layout... > > > > > > > > > > -- Snip ucf-file > > > > > > > > > > ############################################################################ > > > > > ### > > > > > ## Trying to make to Main DSP output enable faster. 24.01.2002 -mm- > > > > > TIMEGRP DSPRdDataPath = PADS(dsp_data(*)); > > > > > ############################################################################ > > > > > ### > > > > > > > > > > TIMEGRP DSPRdCtrlPath = PADS(dsp_csn : dsp_rdn); > > > > > > > > > > ############################################################################ > > > > > ### > > > > > > > > > > TIMESPEC TSP2P = FROM : DSPRdCtrlPath: TO : DSPRdDataPath : 6ns; > > > > > ############################################################################ > > > > > ### > > > > > > > > > > ############################################################################ > > > > > ### > > > > > ## Increase Driver Strength 25.01.2002 -mm- > > > > > NET "dsp_data(*)" IOSTANDARD = LVTTL; > > > > > NET "dsp_data(*)" DRIVE = 12; > > > > > -- Snip ucf-file > > > > > > > > > > markus > > > > > > > > > > -- > > > > > ******************************************************************** > > > > > ** Meng Engineering Telefon 056 222 44 10 ** > > > > > ** Markus Meng Natel 079 230 93 86 ** > > > > > ** Bruggerstr. 21 Telefax 056 222 44 10 ** > > > > > ** CH-5400 Baden Email meng.engineering@bluewin.ch ** > > > > > ******************************************************************** > > > > > ** Theory may inform, but Practice convinces. -- George Bain ** > > > > -- > > --Ray Andraka, P.E. > > President, the Andraka Consulting Group, Inc. > > 401/884-7930 Fax 401/884-7950 > > email ray@andraka.com > > http://www.andraka.com > > > > "They that give up essential liberty to obtain a little > > temporary safety deserve neither liberty nor safety." > > -Benjamin Franklin, 1759 -- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759Article: 38829
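[For readers following the thread above, here is a minimal Verilog sketch of the combinatorial pad-to-pad path Markus is trying to constrain. The module and signal names are assumptions for illustration, and it shows only the plain combinatorial enable, not John_H's IOB tristate-latch trick.]

// Combinatorial output enable: drive the data bus only while both DSP
// strobes are low.  The TIMESPEC in the quoted UCF constrains the
// dsp_csn/dsp_rdn pad -> LUT -> tristate control -> dsp_data pad path.
module dsp_bus_drive (
    input  wire        dsp_csn,    // chip select, active low
    input  wire        dsp_rdn,    // read strobe, active low
    input  wire [31:0] read_data,  // data to present to the DSP
    output wire [31:0] dsp_data    // shared data bus pads
);
    wire drive = ~dsp_csn & ~dsp_rdn;               // both low -> drive
    assign dsp_data = drive ? read_data : 32'bz;    // maps to OBUFTs
endmodule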
Russell Shaw wrote:
> It can be windows related. I've had problems with win95 which
> went away with win2k. Are you using the latest maxplus2/leonardo?

I realize that your explanation might explain the problem. I have always wondered why the latest version of Netscape 4.7x still crashes occasionally when I run it on my Windows 98 PC. I once had a chance to use Windows 2000 with Netscape 4.7x for a fairly long time, and I expected it to still crash occasionally, but the browser never crashed. So far, I have never seen Netscape 4.7x crash under Windows 2000. From that, I concluded that the reason Netscape crashes is not that Netscape 4.7x is buggy; it is all Windows 98's fault.

The reason I don't feel like upgrading to Windows 2000 or Windows XP Home Edition is that I don't feel like paying more Microsoft tax to Microsoft. I got my current Windows 98 PC for free from FreePC.com (another Idealab! business failure, but a nice deal for me because as a result I got a free computer), so I didn't have to pay a dime to Microsoft to acquire Windows 98. I realize that Windows 98 is pretty horrible, but ISE WebPACK 4.1 still rarely crashes on it, so there should be no excuse for buggy software.

The version of LeonardoSpectrum-Altera I used was Ver. 2001_1a_28_OEM_Altera from an Altera Digital Library CD-ROM. I have MAX+PLUS II-BASELINE 10.0 installed, but I now rarely use it. I instead use Quartus II 1.1 Web Edition when I deal with Altera devices. I think Quartus II 1.1 Web Edition is a much better tool than MAX+PLUS II-BASELINE. I would prefer it if Quartus II 1.1 Web Edition supported ACEX 1K, but it doesn't (the virtually equivalent FLEX10KE is supported, though).

Kevin Brace (Don't respond to me directly, respond within the newsgroup.)

Article: 38830
I am not sure if ISE WebPACK 4.1 supports schematics (I use HDL only, so I don't really know), but perhaps you may want to upgrade to ISE WebPACK 4.1. You can download a free copy from the Xilinx website (a 12-hour download for the full version over a 56K modem) or order a free CD (actually a CD-R) from Insight Electronics.

http://208.129.228.206/solutions/kits/xilinx/webpack/

Either way, you won't have to pay anything to use the software. ISE WebPACK 4.1 supports all Spartan-II devices.

Kevin Brace (Don't respond to me directly, respond within the newsgroup.)

Article: 38831
> What does the fpga editor show, and what do you need it for?

It's a disassembler and microscope. It lets you see what the placer and router did to your design and/or what the previous tools did to your source code before the P+R tools got a chance to mangle it.

"Need" is a funny word in this context. If all the tools were wonderful, you wouldn't need it. If all the other tools are working correctly and your design isn't pushing things to the limit, you don't need it. If you don't know how to use it then you don't "need" it (yet?).

But if something isn't meeting timing and you know what the inside of the chip is like, it's often easy to find the problem using FPGA Editor. Sometimes you can fix it too.

The FloorPlanner covers most of the problem area now. In the old days the editor was much more important.

--
These are my opinions, not necessarily my employer's. I hate spam.

Article: 38832
>> When starting, it's probably simpler to read the raw-bits
>> file. Then you don't have to worry about which end of the
>> byte goes out first.
>
> Is this the difference to Xilinx conf. memories?

Same data in the file, just a different file format. The raw-bits format is ASCII - text characters for 0s and 1s. It's pretty unlikely that you will get the bits-in-a-byte swapped.

--
These are my opinions, not necessarily my employer's. I hate spam.

Article: 38833
Just a minor comment about Quartus and Leonardo (2001_1d) on win2k.

I did a few benchmarks of the 6502 core at www.free-ip.com. Using native Quartus, the synthesised design was 2000+ LEs at 29MHz; using Leonardo instead, with Altera's place and route, brought this to 998 LEs and 42MHz. I've done other benchmarks of substantial cores with similar results (well, except for the 8051 synthesisable core, which always crashes Quartus no matter what).

I've found Leonardo to have a few rough edges but only one crash. Quartus II, on the other hand, is still as reliable as a custard castle in a sandstorm. I love the way it crashes midway through updating your design files to trash them completely. So Quartus II 1.1 SP2: handle with care.

Having said all that, I did evaluate the Xilinx WebPACK and found its user interface much more clunky, and the amount of manual fiddling, while great for the last ounce of performance, is far more of a chore for regular run-of-the-mill designs.

Bottom line is I still fight the daily battle with the custard castle monster and am getting quite good at predicting/avoiding the crashes.

Paul

> > It can be windows related. I've had problems with win95 which
> > went away with win2k. Are you using the latest maxplus2/leonardo?
>
> I realize that Windows 98 is so horrible, but still ISE WebPACK
> 4.1 rarely crashes, so there should be no excuse for a buggy software.
> The version of LeonardoSpectrum-Altera I used was Ver.
> 2001_1a_28_OEM_Altera from an Altera Digital Library CD-ROM.
> I have MAX+PLUS II-BASELINE 10.0 installed, but I now rarely use it.
> I instead use Quartus II 1.1 Web Edition when I deal with Altera
> devices.

Article: 38834
Kevin Brace wrote: > > Russell Shaw wrote: > > > > > > It can be windows related. I've had problems with win95 which > > went away with win2k. Are you using the latest maxplus2/leonardo? > > I realize that your explanation might explain the problem. > I have always been wondering why the latest version of Netscape 4.7x > still crashes occasionally when I ran it with my Windows 98 PC. > I once had a chance to use Windows 2000 with Netscape 4.7x for fairly > long time, and I expected it to still crash occasionally, but the > browser never crashed. > So far, I have never seen Netscape 4.7x crash when using Windows 2000. > From that, I concluded that the reason Netscape crashes is not because > Netscape 4.7x is buggy, but it is all Windows 98's fault.] I'm using Netscape 4.75. I found that if mail or communicator suddenly gets slow, i exit out and open the windows task manager, and kill the netscape task which is still running for some reason. Netscape goes ok when you open it again. > The reason I don't feel like upgrading to Windows 2000 or > Windows XP Home Edition is because I don't feel like paying more > Microsoft tax to Microsoft. > I got my current Windows 98 PC for free from FreePC.com (Another > Idealab! business failure, but was a nice deal for me because as a > result I got a free computer.), therefore, I didn't have to pay a dime > to Microsoft to acquire Windows 98. > I realize that Windows 98 is so horrible, but still ISE WebPACK > 4.1 rarely crashes, so there should be no excuse for a buggy software. > The version of LeonardoSpectrum-Altera I used was Ver. > 2001_1a_28_OEM_Altera from an Altera Digital Library CD-ROM. > I have MAX+PLUS II-BASELINE 10.0 installed, but I now rarely use it. > I instead use Quartus II 1.1 Web Edition when I deal with Altera > devices. > I think Quartus II 1.1 Web Edition is a much better tool than MAX+PLUS > II-BASELINE. > I preferred if Quartus II 1.1 Web Edition supported ACEX 1K, but the > Quartus II 1.1 Web Edition doesn't (Virtually equivalent FLEX10KE is > supported though.). > > Kevin Brace (Don't respond to me directly, respond within the > newsgroup.)Article: 38835
Hi,

I would suggest doing the assignment which works best for the PCB. The automatic assignment in Max+ is a little bit strange (at least for me). In my projects I had no problems with assignments even when a 1K30 was filled to 95%, and compile time was shorter with assigned pins.

Martin
--
working on a Java processor soft core: http://www.jopdesign.com
--

"Jeroen Van den Keybus" <vdkeybus@esat.kuleuven.ac.be> schrieb im Newsbeitrag news:1011993874.881445@seven.kulnet.kuleuven.ac.be...
> Hello,
>
> I want to connect an EP1K100 ACEX to a 32-bit host which will access it
> asynchronously. So there is a 32 bit data bus and a 16 bit address bus and
> some control signals (nWE, nRD, nCS). A colleague of mine will be designing
> the PCB for it and he would like to start routing asap even while the FPGA
> software is still being written. So he wants to have a complete pin
> assignment already. As a matter of fact, the FPGA software will probably
> often be rewritten on the same PCB to accomodate different lab setups.
>
> My question: are there any guidelines regarding the pin position or should I
> rather have him (my colleague) define the pinout for easiest layout. More
> precisely, should we rather put D[0..31] and A[0..15] on column or row
> interconnects (the ACEX will be written to and read from). We'd hate to see
> the Max+2 fitter fail at the end of the month just because of stupid pin
> positions we can't change anymore.
>
> Because of this issue we have already oversized the FPGA, normally max.
> 60-70% of the LE's will be used. But apart from EMC guidelines stating that
> 'bunching' of large groups af signals should be avoided, we have found
> nothing more on this topic.
>
> Jeroen.

Article: 38836
Hi, anyone help with this?

Got a design in RTL plus some Xilinx coregen from Xilinx 3.1iSP4. RTL simulates just fine. Done synth. and PAR. BUT, the PAR was Xilinx 3.3iSP8. Now trying to do post PAR simulation. It fails fairly dismally.

Should I point my simprim library mapping at the compiled simprim library that is compiled as follows:

Option 1. Xilinx simprim 3.1i compiled by Modelsim 5.5b
Option 3. Xilinx simprim 3.3i compiled by Modelsim 5.5b
Option 1. Xilinx simprim 3.3i compiled by Modelsim 5.5d

I should add that we are now using MS5.5d for all sims. I should add that when an older model was simulated with Xilinx 3.1i and MS5.5b used everywhere, the post PAR sim worked OK.

TIA, Niv.

Article: 38837
See amendment to options!! Sorry.

niv <niv@ntlworld.com> wrote in message news:BWx48.27902$ka7.4621195@news6-win.server.ntlworld.com...

Hi, anyone help with this?

Got a design in RTL plus some Xilinx coregen from Xilinx 3.1iSP4. RTL simulates just fine. Done synth. and PAR. BUT, the PAR was Xilinx 3.3iSP8. Now trying to do post PAR simulation. It fails fairly dismally.

Should I point my simprim library mapping at the compiled simprim library that is compiled as follows:

Option 1. Xilinx simprim 3.1i compiled by Modelsim 5.5b
Option 2. Xilinx simprim 3.3i compiled by Modelsim 5.5b
Option 3. Xilinx simprim 3.3i compiled by Modelsim 5.5d

I should add that we are now using MS5.5d for all sims. I should add that when an older model was simulated with Xilinx 3.1i and MS5.5b used everywhere, the post PAR sim worked OK.

TIA, Niv.

Article: 38838
Hello,

When I see peaks in my simulation in a small PLD (50% full) but not in a bigger PLD (20% full), what is wrong? Badly programmed? Is 50% too full? Must something be optimized?

Thanks
Martin.Fischer@fzi.de

Article: 38839
"Martin Fischer" <uplx@rz.uni-karlsruhe.de> schrieb im Newsbeitrag news:a2uame$9k2$1@news.rz.uni-karlsruhe.de... > Hello, > > When I have Peaks in my Simulation in an small > PLD (50% full) and not in an bigger PLD (20% full) What do you mean with peaks?? small pulses (glitches) on data signals. This is just normal an ca be ignored in 99% of all cases. But your clock signals must be absolutely glitch free. > whats wrong ? Bad programmed ? Is 50% too full ? No, its just some race conditions between different input signals of a decoder. This can (an in most cases will) cause the decoder output to inhibit a glitch. > Must something optimized ? No. As long as these signals are no clock signals. -- MfG FalkArticle: 38840
"Russell Shaw" <rjshaw@iprimus.com.au> schrieb im Newsbeitrag news:3C523652.20E70608@iprimus.com.au... > > What does the fpga editor show, and what do you need it for? It shows the low level schematics of your FPGA, with all details of the CLBS, all routing etc. In general, you need this only for Maximum performance trims of your design. The average user dont need it at all. But it gives you nice insight into the architecture and how to do good designs. Hmm, maybe the average user needs this too??. -- MfG FalkArticle: 38841
"Russell Shaw" <rjshaw@iprimus.com.au> schrieb im Newsbeitrag news:3C5238A9.8129D1B6@iprimus.com.au... > > Ok, if you want to check that any latches have been > generated without staring at each line of vhdl, how > would you do that in the xilinx tools? This is easy, just look at the map report. There you can see wheter there are latches or not. If there are some, just go into the floorplanner (no place and route neccessary) and have a look at the primitives. The latches should have the same names like the signal in the VHDL code (as well as FlipFlop, which are not renamed during synthesis) -- MfG FalkArticle: 38842
Hi, Kevin,

Thank you very much for such helpful information.

> Assuming you are going to eventually develop your own PCB, even
> if you decide to write your own PCI IP core, it might still be a good
> idea for your PCB to use the exact same pinout as Xilinx LogiCORE PCI's
> Spartan-II PQ208 package so that even if you decided to suddenly switch
> back to Xilinx LogiCORE PCI, you won't have to change the PCB.
> Insight Electronics Spartan-II PCI Development Kit's schematic which you
> can download from the above mentioned URL tells you that without getting
> a license from Xilinx.

Do you mean the pinout used in the Insight PCI board is the same as defined in the LogiCORE PCI?

Actually, what I'm going to do is to design a board with FPGA + MCU. The FPGA I'm going to use is an XC2S200-6. I haven't decided which package to use. The MCU will be a StrongARM SA1110 or equivalent with 128MB SDRAM populated.

This board will be designed first for DSP or communication applications. However, since the XC2S200 can support PCI, I'll make the PCI connector available as well for later use. I'm interested in the PCI-Wishbone bridge from OpenCores. As the PCI design is just a backup function, I'm not in a hurry. I just want to make the connection ready, without actually developing or testing the PCI core at the moment. But I'm sure I will use the PCI later.

This board is for research and academic purposes. It means, after I've done it, its design will be available from my web server.

With regards,

David

Article: 38843
Anyone got any tips about web sites/newsgroups discussing problems/tips with Altera devices and tools? Obviously there is a certain amount of excellent information here and in c.l.vhdl, but I was surprised that there isn't an Altera newsgroup or similar bulletin board for users to discuss things. Altera do respond to specific service requests as best they can, but I think I'd gain a lot from discussions with other users etc.

Paul

Article: 38844
Hal Murray wrote: > >What does the fpga editor show, and what do you need it for? > > It's a disassembler and microscope. It lets you see what the > placer and router did to your design and/or what the previous > tools did to your source code before the P+R tools got a > chance to mangle it. > > "need" is a funny word in this context. If all the tools > were wonderful, you wouldn't need it. If all the other > tools are working correctly and your design isn't pushing > things to the limit you don't need it. > > If you don't know how to use it then you don't "need" it > (yet?). > > But if something isn't meeting timing and you know what > the inside of the chip is like, it's often easy to find > the problem using FPGA Editor. Sometimes you can fix it > too. > > The FloorPlanner covers most of the problem area now. > In the old days the editor was much more important. > > -- > These are my opinions, not necessarily my employer's. I hate spam. FPGAEditor is also the only way to create hard macros, but, once again, these are really needed only when pushing the limits of density and/or speed. As Hal says FPGAEditor is in that class of tools that 95% of the time you don't need but for the other 5% its only thing that's going to save you.Article: 38845
niv wrote: > See amendment to options!!Sorry > > niv <niv@ntlworld.com> wrote in message > news:BWx48.27902$ka7.4621195@news6-win.server.ntlworld.com...Hi, > anyone help with this? Got a design in RTL plus some Xilinx > coregen form Xilinx 3.1iSP4. RTL simulates just fine.Done > synth. and PAR. BUT, the PAR was Xilinx 3.3iSP8.Now trying > to do post PAR simulation. It fails fairly dismally. > Should I point my simprim library mapping at the compiled > simprim library that is compiled as follows: Option 1. > Xilinx simprim 3.1i compiled by Modelsim 5.5bOption 2. > Xilinx simprim 3.3i compiled by Modelsim 5.5bOption 3. > Xilinx simprim 3.3i compiled by Modelsim 5.5d I should add > that we are now using MS5.d for all sims. I should that then > when an older model was simulated with Xilinx 3.1i and > MS5.5b used everywhere, then the post PAR sim worked > OK. TIA, Niv. > How does the post PAR sim fail ? It might not be anything to do with the simprim libs, you might have got caught out by a speed file change between SP4 and SP8. IIRC there was a fairly big change to the BlockRAM timings somewhere along the SP line, together with a lot of legitimate complaints about it on this NG.Article: 38846
Hi DG, I am getting to the point where I need to nail down my approach to the debug and programming of the MSP430 and other parts on the board and I thought I would contact you to see if you have gotten anywhere with this. My application is a little different from yours. I won't be using the JTAG chain to program the Xilinx parts. I will need to use the JTAG connector to program the MSP430 and to debug it as well as the TMS320C6711, one at a time, of course. The pinout of the two connectors are not the same, but the JTAG pin functions are similar except for two extra on the MSP430. There is a RST signal that seems to be needed and a power signal that may be optional, I'm not sure. I think I can wire a JTAG connector to support all the signals for the C67 DSP (as well as any other TI DSP) with two extra pins. These would provide the RST and Vcc signals that the MSP430 emulator needs. Then an adaptor would be used to connect the IAR KickStart emulator. All four of my JTAG chips would be in the same chain. In my case, this should work assuming that I can get both the DSP and MSP emulators to work with a scan chain. I have been told by TI DSP support that Code Composer does support other devices in the scan chain. I also saw in the setup for my C3x version of Code Composer where you can add "Bypass" as a device in the chain. So I can probably get that to work. But I have the TI support team working on the question about whether the MSP430 emulator can do the same thing. I have also ordered the Flash Emulation Tool, so I will be able to check it out myself in a few days. I belive this includes the same emulation HW/SW are the IAR toolset. It is just limited in program size. I have no plans to download the Xilinx parts via the JTAG cable. One part is SRAM based, so it has to be loaded at power up anyway and that will be done by the DSP. I will have to write code to do that. The other part is a Flash part, but it will also be loaded by the DSP when updated. This board will be fully upgradable in the field for both gateware and firmware via the firmware/RS-232 connection. That is what I know. Have you explored any further? DG_1 wrote: > > Thanks again rickman, > Yss, I know, 'God' has little with human's flabbergastings. That's > just an expression, sorta "hit-and-miss' situation. > Anyways, I haven't seen any ability to add BSDL files to IAR's > IDE. That practically menas that MSP430 has to have a separate > JTAG connector, or even somehow 'multiplexed' let's just say > for now I would keep Xilinx and MSP430 chain separated. > > Well, neither Xilinx's JTAG programmer nor IAR's downloader > doesn't have compatible connectors, so here I need some > brainstorming on how to use one connector with two different > JTAG tools. > So, for now I have two solutions: > 1) To have separate chains (and to use both JTAG tools > independently) and separate JTAG connectors. > 2) To chin-up both chips into one JTAG chain and use Xilinx > download tool with added MSP430 BSDL file. > (debugging MSP430 would be handled by simulator prior > to adding resulting binary file into downloader's list) > and testing both poppies as is, without having conveniance > of debugging MSP430 via JTAG. > > > I am also concerned with the same JTAG compatibility problem with the > > C67 DSP. I will need to contact TI about that. > Right. 
I haven't use C67 so far, but logic tells me that you will > encounter similar problem - how _your_ downloader is going > to distinguish between separate chips in the chain (without BSDL) > and how your debugger is going to perform function cause > JTAG is a serial stream of data, right? How debugger is going > to know where to start and to end the stream (without having BSDL). > > > Oh yeah, I also have to check with Xilinx since I will need to program > > the XCR3256 after the board is built. > Here you'll not have a 'serious' problem - Xilinx's downloader tool > have a nice GUI - you can click on icon of the chip, add the binary, > add additional chip(s) and even play around with timings (to some > extent). I love it. > > "rickman" <spamgoeshere4@yahoo.com> wrote in message > news:3C42921C.4727C00@yahoo.com... > > I don't think I would leave this "in God's hands". I would contact IAR > > and get the straight scoop. If they don't give you a way to include BDSL > > files for the other chips in the chain, I don't think it will work. If > > nothing else, it has to know how many chips are in the chain. I think > > there is a way to put each chip in a state where it looks like a single > > FF. This would be the simplest way for the IAR debugger to deal with the > > other chips. > > > > I only need the JTAG on the MSP430 for debugging of the code in > > development and I need something to let me program the MSP430 flash in > > production. They have a "boot monitor" that will work as a 9600 bps > > serial port which I may use. But they did not use the same pins as the > > actual serial port, so it will be a little tricky to get this going > > without using up too many pins on the MSP430. I am using every last IO > > pin. > > > > I am also concerned with the same JTAG compatibility problem with the > > C67 DSP. I will need to contact TI about that. > > > > Oh yeah, I also have to check with Xilinx since I will need to program > > the XCR3256 after the board is built. > > > > Too bad JTAG is not more widely supported across vendors. It always > > seems to have trouble when the chains are mixed. I just don't have the > > board space to have three separate JTAG connectors. > > > > > > > > DG_1 wrote: > > > > > > Thanks 'rickman'. > > > The only problem I 'foresee' is the fact that IAR Kickstart doesn't > > > have any capability of adding/editing BSDL files (or I missed > something). > > > Therefore, I don't see the way how to use debugging features > > > of IAR Kickstart when _more_ than one chip is in the same chain. > > > I guess, designers of IAR's tool-set didn't (intentionally?) > > > though-out that possibility. Also, I guess, from Xilinx's perspective, > > > having MSP430 in the same chain is no big deal, just add BSDL > > > file for TI's part to Xilinx's 'JTAG programmer' tool. > > > In the past I've used JTAG to chain-up devices (Xilinx, Lattice) > > > but always from the same manufacturer, I've never mixed-up > > > different chips, from different manufacturers, neither I added MPUs > > > into the chain. now I guess the only way to check it up is to make > > > the actual circuitry and then 'everything is in God's hands'. > > > (Well, the same problem is applicable to Atmel AVmega128 > > > to be chained-up with other JTAG-capable chips) > > > > > > "rickman" <spamgoeshere4@yahoo.com> wrote in message > > > news:3C414ACF.EA8194EA@yahoo.com... 
> > > > DG_1 wrote: > > > > > > > > > > Hi there, > > > > > Has anybody tried to chain-up a MSP430 with any of JTAG-capable > > > > > Xilinx chips and be able to programm both of them without problem(s) > > > > > (MSP430 via IAR KickStart, Xilinx via JTAG programmer)? > > > > > Or (re-arranged question):: > > > > > Does IAR Kick-Start still recognizes MSP430 and/or allows other > > > > > devices (other than MSP430) to be chained-up via JTAG? > > > > > > > > > > Thanks in advance, > > > > > -- D.G. > > > > > > > > > > > > I will be doing exactly this in a month or two. I am building a board > > > > with a TMS320C6711, an MSP430F148/9, an XC2S150E and an XCR3256XL all > in > > > > one JTAG chain. Actually, I may leave the MSP430F148/9 out of the > chain > > > > depending on the answers to the questions I will be asking the > vendors. > > > > But I really want the rest of it in a single chain so that I can do > > > > boundry scan testing on it all. The MSP430F148/9 will not be quite so > > > > integrated into the rest of the board, so it does not have to be > tested > > > > that way. It is also important to be able to burn software into it > > > > regarless of the state of the board. This will be used for initial > board > > > > test too. I am even considering using the MSP430F148/9 as a JTAG > > > > interface for the JTAG chain. But we will see if I can get it all to > > > > work together. > > > > > > > > If you have any results yourself, please let me know. Thanks! > > > > > > > > -- > > > > > > > > Rick "rickman" Collins > > > > > > > > rick.collins@XYarius.com > > > > Ignore the reply address. To email me use the above address with the > XY > > > > removed. > > > > > > > > Arius - A Signal Processing Solutions Company > > > > Specializing in DSP and FPGA design URL http://www.arius.com > > > > 4 King Ave 301-682-7772 Voice > > > > Frederick, MD 21701-3110 301-682-7666 FAX > > > > > > > > -- > > > > Rick "rickman" Collins > > > > rick.collins@XYarius.com > > Ignore the reply address. To email me use the above address with the XY > > removed. > > > > Arius - A Signal Processing Solutions Company > > Specializing in DSP and FPGA design URL http://www.arius.com > > 4 King Ave 301-682-7772 Voice > > Frederick, MD 21701-3110 301-682-7666 FAX > > -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 38848
"Deli Geng (David)" wrote: > > Hi, Kevin, > > Thank you very much for so helpful information. > > > Do you mean the pinout used in the Insight PCI board is the same as defined > in the LogiCORE PCI? > I am pretty sure about that. Most users who prototype their PCI designs with Insight Electronics Spartan-II PCI Development Kit seem to use Xilinx LogiCORE PCI32. Furthermore, Insight Electronics Spartan-II PCI Development Kit reference design contains a UCF file that seems identical to the UCF file a licensed user of Xilinx LogiCORE PCI can download from Xilinx LogiCORE PCI Lounge. To gain access to Insight Electronics reference designs, you will have to purchase the product first, and then obtain a password from them. If you wanted to use Fine-Pitch BGA package, I still think you should try to use the same pinout Xilinx uses for Xilinx LogiCORE PCI. Perhaps you may want to post a question to this newsgroup, and hopefully someone knows the Xilinx LogiCORE PCI pinout for Spartan-II FG256 and FG456 package. I guess what I am suggesting sounds like "socket stealing" which AMD or Cyrix did with Socket 7 bus. I think it is not a good idea to pick your own PCI pinout because if Address/Data bus signals like AD[31:0] are located too far away from timing critical control signals like FRAME#, IRDY#, or TRDY#, you may not meet even 33MHz PCI setup timings (Tsu < 7ns) I think meeting Tsu is the hardest part of a PCI design. > Actually, what I'm going to do is to design a board with FPGA + MCU. The > FPGA I'm going to use is XC2S200 -6. I haven't decided which package to > use. The MCU will be StrongARM SA1110 or equivalent with 128MB > SDRAM populated. > > This board will be designed first for DSP or communication application. > However, since XC2S200 can support PCI. I'll make the PCI connector > available as well for later usage. > I forgot the details, but I thought Avnet (another Xilinx distributor) had a Compact PCI card that contains a StrongARM (I thought it was SA1110, but not sure.) and Spartan-II XC2S100. In Xcell Journal Issue 39, a firm called ADI Engineering (http://www.adiengineering.com) developed a XScale (Intel 80200) based evaluation board with a PCI bridge. The article says the PCI part of the design used Xilinx LogiCORE PCI64. I don't remember the amount of RAM, but I don't think it was anywhere close to the RAM you are talking about. I don't think either development board has the amount of RAM you want, but it might not be bad to pick up one for some early development before you design your own PCB. You can still download a PDF version of past Xcell Journal articles from Xilinx. > I'm interested in the PCI-Wishbone bridge of OpenCores. As the PCI design is > just a backup function, I'm not in a hurry. I just want to make the > connection ready, without actually developing or testing the PCI core at the > moment. But I'm sure I will use the PCI later. > One thing I noticed is that the type of PCI design you seem to be talking about sounds like you want a Host PCI bridge, not just a PCI initiator design. In my opinion, Host PCI bridge requires more work than just a PCI IP core with initiator (bus master) feature since the Host PCI bridge will have to arbitrate the PCI bus (has to control GNT#). In a PCI IP core with initiator feature, the PCI IP core doesn't really have to worry about bus arbitration other than when the PCI IP core wants to become a bus master or when the Host PCI bridge is parking the bus at the PCI IP core (Bus Parking). 
> This board is for research and academic purposes. It means, after I've done
> it, its design will be available from my web server.
>
> With regards,
>
> David

Okay, if you don't happen to have access to good FPGA EDA tools, ISE WebPACK 4.1 should be fine as long as you stick with Spartan-II. If you don't care about 5V PCI, you also have the option of using the recently announced Spartan-IIE, which supports only 3V PCI (like Virtex-E, no 5V PCI support).

Kevin Brace (Don't respond to me directly, respond within the newsgroup.)

Article: 38849
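[The host-bridge arbitration point above - the bridge has to drive GNT# for every master - is easy to underestimate. The following Verilog sketch is only an illustration with made-up names: a deliberately simplified, fixed-priority arbiter that ignores bus parking, latency timers and the hidden re-arbitration a real PCI arbiter must handle.]

module toy_pci_arbiter (
    input  wire       pci_clk,
    input  wire       rst_n,
    input  wire [1:0] req_n,    // REQ# from each master, active low
    output reg  [1:0] gnt_n     // GNT# to each master, active low
);
    // Fixed priority: master 0 wins whenever it requests.
    always @(posedge pci_clk or negedge rst_n) begin
        if (!rst_n)
            gnt_n <= 2'b11;         // nobody granted
        else if (!req_n[0])
            gnt_n <= 2'b10;         // grant master 0
        else if (!req_n[1])
            gnt_n <= 2'b01;         // grant master 1
        else
            gnt_n <= 2'b11;         // idle (a real arbiter would park the bus)
    end
endmodule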
Thanks for the additional comments. I've discovered a few things since my last post:

The lines I was using as inputs to the FPGA are at least part of the problem - they seem to be too weak. I added non-inverting buffers and it *almost* works.

Strangely, the FPGA seems to be registering positive clock edges where it shouldn't. Sometimes this is on a negative edge, sometimes not. It seems like the clock is too noisy, but it looks pretty clean on the oscilloscope. I tried creating a Schmitt trigger using the trick with two resistors and an extra I/O pin, and while I'm not positive that I did it correctly, it made no apparent difference in what I saw on the output pins using a logic analyzer.

I created an even simpler design just to see when the FPGA was registering a positive clock edge by toggling a pin on each positive edge, something like this:

assign output_wire = output_reg;

always @(posedge clk)
    output_reg = output_reg ^ 1;

This fails even worse than the shift register. The output wire usually just mimics the clock, going high on the positive edge and low on the negative edge. Occasionally it seems to "miss" an edge and remain unchanged. But when it goes high, it's always on the positive edge, never the negative, and when it goes low it's always on the negative edge, never the positive. So I'm puzzled.

-Kevin
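[One common workaround for a marginal external clock like the one described above - offered only as a sketch, not as the thread's resolution, and assuming a faster clean on-board clock is available - is to stop clocking flip-flops directly from the noisy pin and instead oversample it, using a detected rising edge as a clock enable. The names fast_clk and ext_clk are assumptions for illustration.]

module ext_clk_edge_enable (
    input  wire fast_clk,   // faster, clean on-board clock (assumed available)
    input  wire ext_clk,    // the marginal external clock, treated as data
    output reg  out_reg     // toggles once per detected rising edge
);
    // Synchronize the external clock into the fast_clk domain and keep a
    // short history of samples.
    reg [2:0] ext_sync;
    always @(posedge fast_clk)
        ext_sync <= {ext_sync[1:0], ext_clk};

    // One fast_clk-wide pulse per rising edge, decided entirely in the
    // fast_clk domain - no flip-flop is clocked by the noisy pin itself.
    wire ext_rise = (ext_sync[2:1] == 2'b01);

    // The logic that used to sit in "always @(posedge ext_clk)" now runs on
    // fast_clk with ext_rise as a clock enable.  A majority-vote filter over
    // more samples could be added to reject narrow glitches as well.
    always @(posedge fast_clk)
        if (ext_rise)
            out_reg <= out_reg ^ 1'b1;
endmodule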