Hi, For the first time I'm working with differential clock inputs on an FPGA and I'd like to know how to properly constrain them. Currently what I put in my UCF is (for a clkin clock with a 200 MHz frequency): TIMESPEC "TS_clkin" = PERIOD "clkin" 200 MHz HIGH 50%; NET "clkin_p" TNM_NET = "clkin"; NET "clkin_n" TNM_NET = "clkin"; Is this correct? Or should I define two timespecs with an offset of half a period, or just constrain clkin_p? I also have two other questions: * How do I specify a maximum skew between multiple signals, and also an offset in/out relative to a clock output? (i.e. I output a clock using DDR FFs and I want the related signals to be, for example, 1 to 2 ns after that clock.) * Finally, in my UCF I put lines like NET "leds<*>" IOSTANDARD=LVCMOS33 | SLEW=SLOW | DRIVE=24; but I get *tons* of warnings about putting the IOSTANDARD, SLEW and DRIVE attributes on the "wrong type of object". What's the problem here? Thanks, Sylvain
Article: 103401
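For the differential clock question, the usual ISE-era answer is to constrain only the P side; the pair acts as a single clock through the IBUFGDS, so no second timespec or half-period offset is needed. A sketch, worth double-checking against the Constraints Guide for your ISE version:

```ucf
# Attach the time name to the P side only; the N side needs no constraint
# of its own -- the pair is one clock through the IBUFGDS.
NET "clkin_p" TNM_NET = "clkin";
TIMESPEC "TS_clkin" = PERIOD "clkin" 200 MHz HIGH 50%;
```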
Yea, what I had my eye on was svn list --verbose {file/dir name} Conveniently, this lists the file/dir's revision # first. My lack of understanding lies on the Quartus side. Is it possible to run a tcl script before all compilations using the GUI tools? I don't mind myself using the command line, but might have a hard time converting over all the team members. Kevin toby wrote: > Mike Treseler wrote: > > kevinwolfe@gmail.com wrote: > > > Our group is using Subversion. What > > > I am hoping to do is to incorporate the Subversion-stored revision > > > number into the actual design (working on an FPGA). I would love to > > > grab the design's Rev. #, store it as a constant in some register map, > > > which is readable by another system (i.e. seamless, automatic firmware > > > version control). > > > > Maybe subversion fills in revision headers like CVS. > > Have a tcl or bash script grab > > the revision header line, say > > -- $Revision: 1.42 $ > > and convert it to a vhdl package file > > with a vector constant, say > > constant revision_c : byte_t := x"8e"; > > that becomes readback data. > > It doesn't; this is a deliberate design decision. > > However, there are several ways of finding revision data from the > command line[1] (or makefile). The main gotcha here is that your build > might quite easily be from a 'mixed revision' working copy, if you have > selectively updated it. > > [1] see the Subversion book > http://svnbook.red-bean.com/nightly/en/svn.ref.html#svn.ref.svn or > built-in help for command summaries. > > > > > -- Mike TreselerArticle: 103402
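One way to automate what is being discussed here, sketched as a small shell script. The file and package names are made up, and a real build would call `svn info` instead of the stand-in string (keeping in mind the 'mixed revision' caveat toby mentions):

```shell
#!/bin/sh
# Sketch: turn the Subversion revision into a VHDL package constant.
# In a real build you'd use:  svn_info=$(svn info)
# A stand-in string is used here so the sketch runs without a working copy.
svn_info="Revision: 142"
rev=$(printf '%s\n' "$svn_info" | awk '/^Revision:/ {print $2}')

# Emit a VHDL package that the rest of the design can read back.
cat <<EOF > rev_pkg.vhd
package rev_pkg is
  constant revision_c : natural := $rev;
end package rev_pkg;
EOF
```

In Quartus, a script like this could be hooked in as a pre-compilation step so every build picks up the current revision automatically.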
Hi, I'm working with Spartan-3 and ISE 7.1. While simulating a small VHDL test program, I noticed that signal updates within a process occur on clock rising edges, while concurrent assignments, outside the process, occur on falling edges. Is this correct? Thanks, Marco
Article: 103403
Xilinx gurus, I want to use a ChipScope ILA or two in an EDK design. I know I have two options for ChipScope: use ISE, add a new source and make a ChipScope source/file; or use the ChipScope Core Inserter (used with an ISE command-line flow), give the inserter paths to the appropriate ng* files, then re-map, place, route, etc. However, I would really like to use ChipScope with the EDK flow. Probably the only option here is to build my design completely, so that all the netlists are created, then run the ChipScope Core Inserter, insert the ILA cores into the netlists, and then go back to EDK and rerun the hardware build (and hope it doesn't re-synthesize) OR run the rest of the back end with command-line scripts. Since the EDK generates some internal logic to include the PPC in the JTAG chain, though, I wonder if this is going to mess up the ChipScope JTAG stuff? Also, I was thinking of making my design completely in EDK and then, when it's ready, changing it into an ISE flow, with everything that entails, and then just inserting the ChipScope core as would be done with any ISE flow. I would rather avoid doing this if possible, but maybe I will have to. What say you Xilinx gurus out there? -Joel
Article: 103404
Rather than creating a constant in a package from the source version info, you might be able to use a top level generic, and pass the value for that generic in via the synthesis command line in your build script? Andy JonesArticle: 103405
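A minimal sketch of Andy's suggestion, with made-up names. The generic default keeps interactive builds working; how the override is passed in depends on the synthesis tool (some XST versions accept a `-generics` option on the command line, but check your tool's documentation):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity top is
  generic (
    -- Overridden from the build script with the source revision;
    -- the default is used when no value is passed in.
    revision_g : std_logic_vector(7 downto 0) := x"00"
  );
  port (
    clk      : in  std_logic;
    rev_data : out std_logic_vector(7 downto 0)
  );
end entity top;

architecture rtl of top is
begin
  -- Exposed through the register map for readback by the other system.
  rev_data <= revision_g;
end architecture rtl;
```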
XST doesn't seem to mind that I'm driving two DCMs with a BUFG. Are there any implications of this? I'm using the FX output on both DCMs, one clock domain to capture digitized data, the other to process it at a different rate after an asynchronous FIFO. Thanks, -Brandon
Article: 103406
Hi there, I've just left a computer compiling the SIMPRIM libraries used by ModelSim for low-level (post-place-and-route) simulation (I started this using the "Compile HDL Simulation Libraries" process in the ISE software). When the process started, I noticed it said the following: Scheduling library compilation for Virtex-II This worries me because I want to run post-place-and-route simulations for other device families as well (Spartan-3, Virtex-4). Will the results generated for these other chips be incorrect because I have used SIMPRIM libraries compiled for Virtex-II? Or, to put it another way, do I have to recompile the SIMPRIM libraries every time I switch from one family to another? Thanks in advance for your help. Regards, Jonathan Clarke
Article: 103407
Hello, I need to snoop Ethernet data so that the corresponding data is sent to the corresponding link. I mean one input and two output links (Ethernet). Is the only way to implement this to buffer the whole Ethernet packet, look for the VLAN address, and send it down the corresponding path? Is there any other way? This all needs to be implemented in an FPGA. Any help will be greatly appreciated. Thanks and regards, Praveen
Article: 103408
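One point worth noting: the 802.1Q VLAN tag sits at a fixed offset (bytes 12-15 of the frame: the 0x8100 TPID, then priority/DEI bits and the 12-bit VID), so a cut-through design can pick the output link after the first 16 bytes instead of buffering the whole packet. A hedged Verilog sketch with made-up signal names:

```verilog
// Sketch only: extract the VLAN ID from a byte-wide receive stream.
// sof marks byte 0 of the frame; byte_cnt holds the index of the byte
// currently on rx_byte for every byte after that.
module vlan_peek (
  input             clk,
  input             sof,         // pulses with the first byte of a frame
  input             data_valid,
  input       [7:0] rx_byte,
  output reg [11:0] vid,
  output reg        vid_valid
);
  reg [10:0] byte_cnt;
  reg [15:0] tpid;

  always @(posedge clk) begin
    if (sof) begin
      byte_cnt  <= 11'd1;       // next valid byte is index 1
      vid_valid <= 1'b0;
    end else if (data_valid) begin
      byte_cnt <= byte_cnt + 1;
      case (byte_cnt)
        12: tpid[15:8] <= rx_byte;
        13: tpid[7:0]  <= rx_byte;
        14: vid[11:8]  <= rx_byte[3:0];        // upper 4 bits of the VID
        15: begin
              vid[7:0]  <= rx_byte;
              vid_valid <= (tpid == 16'h8100); // frame is 802.1Q tagged
            end
      endcase
    end
  end
endmodule
```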
In comp.arch.fpga pinku <praveenkumar.bm@gmail.com> wrote: >Hello, >I need to snooping the ethernet data so that corresponding data is send >to corresponding link. I mean one input and two output link (Ethernet). >So only way to implement is to buffer the whole ethernet packet and >look for the VLAN address and send it to the corresponding path?Is >there any other way ? This all need to be implemented in FPGA. 100 Mbps _hubs_ come to mind. They will output the same data to all ports..
Article: 103409
Hi Joel, "Joel" <jceven@gmail.com> wrote in message news:1149173927.595412.96800@h76g2000cwa.googlegroups.com... > Xilinx gurus, > I want to use a ChipScope ILA or 2 in an EDK design. > ... > What say you Xilinx gurus out there? What version of EDK are you using? To the best of my knowledge, the chipscope ILA, IBAs and VIO modules are available in the EDK IP catalog (under "Debug"). You can drop these in just like any other IP block (you need to add a chipscope "ICON" block and wire that up too). Not sure whether the debugger and chipscope will co-exist on the same JTAG chain (I'm fairly sure they won't). I believe you can get round this by means of a debugger connection over RS232, using an XMD "stub" - but don't quote me on that; I never did it! Cheers, -Ben-Article: 103410
Hi, If I have a design module port that goes to both rising- and falling-edge FFs, how do I constrain it in an FPGA? In ASIC STA, I can use set_input_delay 0.2 -max -clock clk -clock_fall a set_input_delay 0.2 -add -max -clock clk a However, I don't see a clock_fall constraint in the FPGA timing analyzer (TAN). Best regards, ABC
Article: 103411
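In the ISE/UCF flow, the closest equivalent I know of is to put the falling-edge flip-flops into their own time group and give that group a separate OFFSET IN constraint. A sketch; the exact grouping syntax is worth verifying in the Xilinx Constraints Guide:

```ucf
NET "clk" TNM_NET = "clk_grp";
# Group of the elements in clk_grp clocked on the falling edge.
TIMEGRP "fall_ffs" = FALLING "clk_grp";
# Rising-edge flops get the plain constraint, falling-edge flops their own.
NET "a" OFFSET = IN 2 ns BEFORE "clk";
NET "a" OFFSET = IN 2 ns BEFORE "clk" TIMEGRP "fall_ffs";
```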
rickman wrote: > No, please don't be shy. Anyone can make any sort of stupid error and > this is a very likely one. Right now I don't have access to everything > in this process. I designed the board, someone else wrote the software > and a third party (group actually) generated the bit stream. It is > entirely possible that we have multiple or no swaps of the bit order in > the data between the design and the FPGA. But if that were the case, I > believe we would see an error flagged by the INIT_B pin. > Not sure about that; if the byte is bit-swapped, then the sync word will not be captured, which means the configuration process will not start, which means a CRC error will not be flagged by INIT_B. Just ask the software guys to swap the bits and see what's going on. Aurash > > Greg Neff wrote: > >>On 31 May 2006 05:39:42 -0700, "rickman" <spamgoeshere4@yahoo.com> >>wrote: >> >> >>>I'm having trouble configuring a Spartan 3 in parallel slave mode. The >>>mode pins are set with M0 tied to GND and M1,M2 left pulled up >>>internally to 2.5 volts. I verified these voltages with a meter. >>> >>>I am driving PROG_B low for a few microsecs then high. The DONE and >>>INIT_B signals go low and INIT_B goes high again. I set RDWR_B low and >>>start clocking data in by setting CS_B low and using the WR_N signal >> >>>from the DSP to clock CCLK on the trailing edge. I have looked at all >> >>>of these signals with the scope and they look clean and have good >>>timing. But no matter how much data we clock into the Spartan 3, we >>>never see either INIT_B go low or DONE go high. DONE has a 10 kohm >>>pullup resistor to 2.5 volts and a buffer converts this signal to 3.3 >>>volts for the DSP. >>> >>>I do see something odd on the BUSY pin. It goes high for some 800 ns. >>>The data sheet says I don't even have to monitor this pin if I keep the >>>CCLK rate below 50 MHz. We are clocking data into the part at less >>>than 10 MHz. >>> >>>Any idea what could be wrong? 
>> >>We use this mode and it works fine. >> >>At the risk of asking a stupid question (please don't take it the >>wrong way) did you remember that the configuration data port signal >>names use IBM nomenclature? In other words, D0 is the most significant >>bit... >> >>================================ >> >>Greg Neff >>VP Engineering >>*Microsym* Computers Inc. >>greg@guesswhichwordgoeshere.com > >
Article: 103412
I am using Xilinx ISE 8.1i on Linux. I have a bunch of Verilog and project files that I need to check in to our version control system (Perforce) so others can work on them as well. How are file paths in the project files handled? For example, when I work on the project the files are under /home/myname/... but when somebody else opens the project he sees the files under /home/othername/... Can I have relative references from the project file to the source files? Any other suggestions? Thanks, Joe
Article: 103413
Roger wrote: > I'm planning on using the TXPOLARITY and RXPOLARITY attributes in the RIOs > on a Virtex-II Pro device (via Aurora cores) to swap the function of the N > and P lines in order to help improve the path of PCB tracks to an edge > connector. > > Has anyone used this polarity swapping capability before? Does it work OK? > Are there any "undocumented features" or pitfalls that I should know about > before designing the board in this way? > There is no difference when asserting the TXPOLARITY and RXPOLARITY inputs on the Virtex-II Pro MGTs other than a logical inversion of the data stream to accommodate P/N pin swaps. The Aurora core includes detection logic in the receiver portion and will automatically switch the RXPOLARITY state if it needs to in the initialization state. The only "pitfall" to watch for is to make sure that you properly document your schematic so that when someone else picks it up a few years down the road they know that you did this intentionally and don't curse your name after spending 3 weeks debugging why the link didn't work. Ed McGettigan -- Xilinx Inc.Article: 103414
pbdelete@spamnuke.ludd.luthdelete.se.invalid wrote: > In comp.arch.fpga pinku <praveenkumar.bm@gmail.com> wrote: > >>Hello, >>I need to snooping the ethernet data so that corresponding data is send >>to corresponding link. I mean one input and two output link (Ethernet). >>So only way to implement is to buffer the whole ethernet packet and >>look for the VLAN address and send it to the corresponding path?Is >>there any other way ? This all need to be implemented in FPGA. > > > 100 Mbps _hubs_ comes to mind. They will output the same data to all ports.. > I think he wants to say VLAN routerArticle: 103415
Aurelian Lazarut wrote: > rickman wrote: > > No, please don't be shy. Anyone can make any sort of stupid error and > > this is a very likely one. Right now I don't have access to everything > > in this process. I designed the board, someone else wrote the software > > and a third party (group actually) generated the bit stream. It is > > entirely possible that we have multiple or no swaps of the bit order in > > the data between the design and the FPGA. But if that were the case, I > > believe we would see an error flagged by the INIT_B pin. > > > not sure about that, if the byte is bit swapped, then the sync word will > not be captured, which means the configuration process will not start, > which means CRC error will not be flagged by INIT_B > just ask to software guys to swap the bits and see what's going on. > Aurash > > > > Greg Neff wrote: > > > >>On 31 May 2006 05:39:42 -0700, "rickman" <spamgoeshere4@yahoo.com> > >>wrote: > >> > >> > >>>I'm having trouble configuring a Spartan 3 in parallel slave mode. The > >>>mode pins are set with M0 tied to GND and M1,M2 left pulled up > >>>internally to 2.5 volts. I verified these voltages with a meter. > >>> > >>>I am driving PROG_B low for a few microsecs then high. The DONE and > >>>INIT_B signals go low and INIT_B goes high again. I set RDWR_B low and > >>>start clocking data in by setting CS_B low and using the WR_N signal > >> > >>>from the DSP to clock CCLK on the trailing edge. I have looked at all > >> > >>>of these signals with the scope and they look clean and have good > >>>timing. But no matter how much data we clock into the Spartan 3, we > >>>never see either INIT_B go low or DONE go high. DONE has a 10 kohm > >>>pullup resistor to 2.5 volts and a buffer converts this signal to 3.3 > >>>volts for the DSP. > >>> > >>>I do see something odd on the BUSY pin. It goes high for some 800 ns. > >>>The data sheet says I don't even have to monitor this pin if I keep the > >>>CCLK rate below 50 MHz. 
We are clocking data into the part at less > >>>than 10 MHz. > >>> > >>>Any idea what could be wrong? > >> > >>We use this mode and it works fine. > >> > >>At the risk of asking a stupid question (please don't take it the > >>wrong way) did you remember that the configuration data port signal > >>names use IBM nomenclature? In other words, D0 is the most significant > >>bit... Nice catch! I had not thought of that! I'll check that this afternoon.
Article: 103416
Ah... yes. IP Catalog. I feel really dumb now. It didn't occur to me to check that. Thanks! -Joel Ben Jones wrote: > Hi Joel, > > "Joel" <jceven@gmail.com> wrote in message > news:1149173927.595412.96800@h76g2000cwa.googlegroups.com... > > Xilinx gurus, > > > I want to use a ChipScope ILA or 2 in an EDK design. > > ... > > What say you Xilinx gurus out there? > > What version of EDK are you using? > > To the best of my knowledge, the chipscope ILA, IBAs and VIO modules are > available in the EDK IP catalog (under "Debug"). You can drop these in just > like any other IP block (you need to add a chipscope "ICON" block and wire > that up too). > > Not sure whether the debugger and chipscope will co-exist on the same JTAG > chain (I'm fairly sure they won't). I believe you can get round this by > means of a debugger connection over RS232, using an XMD "stub" - but don't > quote me on that; I never did it! > > Cheers, > > -Ben-Article: 103417
Jim wrote: Jim or Joe.... Let me know if you figure out how to do this. I think with the folder structure in ISE/EDK and the way Perforce handles things, it isn't easy. We don't have a solution yet either. It seems like too much trouble to make it work.Article: 103418
XMD and ChipScope DO coexist on the same JTAG chain. You can even cross-trigger, i.e. stop the processor on a ChipScope trigger event or trigger ChipScope on a software breakpoint to get a complete picture of your hardware/software system at the time of the event. - Peter Ben Jones wrote: > Hi Joel, > > "Joel" <jceven@gmail.com> wrote in message > news:1149173927.595412.96800@h76g2000cwa.googlegroups.com... > >>Xilinx gurus, > > >>I want to use a ChipScope ILA or 2 in an EDK design. >>... >>What say you Xilinx gurus out there? > > > What version of EDK are you using? > > To the best of my knowledge, the chipscope ILA, IBAs and VIO modules are > available in the EDK IP catalog (under "Debug"). You can drop these in just > like any other IP block (you need to add a chipscope "ICON" block and wire > that up too). > > Not sure whether the debugger and chipscope will co-exist on the same JTAG > chain (I'm fairly sure they won't). I believe you can get round this by > means of a debugger connection over RS232, using an XMD "stub" - but don't > quote me on that; I never did it! > > Cheers, > > -Ben- > >
Article: 103419
kevinwolfe@gmail.com wrote: > My lack of > understanding lies on the Quartus side. Is it possible to run a tcl > script before all compilations using the GUI tools? I don't mind > myself using the command line, but might have a hard time converting > over all the team members. This strikes me as more trouble than it is worth. Normally there are many source files in a project and each has a different revision number that could change daily. Product releases don't happen nearly that often because of QA testing. If I wanted to read back something, it would have to do with the overall product release date. I think I would take Andy's advice and just make this some kind of constant that I just type in. Then no one else has to know or worry about it. -- Mike TreselerArticle: 103420
In comp.arch.fpga Aurelian Lazarut <aurash@xilinx.com> wrote: >pbdelete@spamnuke.ludd.luthdelete.se.invalid wrote: >> In comp.arch.fpga pinku <praveenkumar.bm@gmail.com> wrote: >> >>>Hello, >>>I need to snooping the ethernet data so that corresponding data is send >>>to corresponding link. I mean one input and two output link (Ethernet). >>>So only way to implement is to buffer the whole ethernet packet and >>>look for the VLAN address and send it to the corresponding path?Is >>>there any other way ? This all need to be implemented in FPGA. >> >> 100 Mbps _hubs_ comes to mind. They will output the same data to all ports.. >I think he wants to say VLAN router Well that's one thing hubs won't do :) 3xPHY + FPGA should be able to fix this. Otoh, maybe a pc can handle it.Article: 103421
Adam Megacz wrote: > Any pointers? I've come across a rather depressing paper concluding > that this isn't possible on the (extremely old) XC4000, but I'm sort > of hoping against hope that things may have changed They haven't. Adding a clock and using standard synchronization techniques will be ten times easier than any alternative. -- Mike Treseler
Article: 103422
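For reference, the standard technique Mike alludes to usually comes down to a two-flop synchronizer per asynchronous input once a clock is available. A minimal VHDL sketch with made-up names:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity sync2 is
  port (
    clk     : in  std_logic;
    async_i : in  std_logic;   -- asynchronous input
    sync_o  : out std_logic    -- synchronized to clk
  );
end entity sync2;

architecture rtl of sync2 is
  signal meta, stable : std_logic := '0';
begin
  process (clk) is
  begin
    if rising_edge(clk) then
      meta   <= async_i;  -- first flop may go metastable
      stable <= meta;     -- second flop gives it a cycle to settle
    end if;
  end process;
  sync_o <= stable;
end architecture rtl;
```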
Marco wrote: > I'm working with Spartan3 and ISE 7.1, simulating a small VHDL test > program I noted that signal updates within a process are done on clock > rising edges, while cuncurrent assignments, out of the process, are > done along falling edges. > Is it all correct? For a functional sim all should change on the rising_edge. Maybe some of your processes are using falling_edge. -- Mike TreselerArticle: 103423
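A fragment that makes the expected behaviour concrete (signal declarations omitted): in a functional simulation both assignments below update just after the rising edge, one delta cycle apart; a falling-edge update would have to come from code explicitly using falling_edge(clk) somewhere.

```vhdl
-- Both q_proc and q_conc change at the rising edge in simulation.
process (clk) is
begin
  if rising_edge(clk) then
    q_proc <= d;           -- inside a process
  end if;
end process;

q_conc <= q_proc;          -- concurrent assignment: follows q_proc one
                           -- delta cycle later, still at the rising edge
```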
Thx for your response. I already did some initial testing using the pre-compiled BERT designs that came with the board. I'm trying to take a step ahead by playing around w/ the Aurora core, b/c I need to work with other protocols like PCIe (PCIe core) and flow control in the future. The reason I tried to feed a PRBS pattern into the MGTs is b/c that's what the BERT design used as a sample datastream. So, I'm guessing it's not possible to use a datastream out of an external pattern generator to test the Aurora design using the setup that I talked about. Is there sample code available to create a stimulus signal to test the core using this setup? (Someone had suggested that I implement a counter and use that as input to the core. I'm still having some trouble w/ that b/c I'm still in the process of learning HDL and I'm not sure how to link that up w/ the Aurora core.) I'm not sure if I understand your question about other logic being wrapped around the core to source and sink the Local Link interface. (Pardon me... still a novice at FPGA design and HDL.) I just set up the core, compiled the design using xilperl and uploaded the bit file using IMPACT. I made sure that the scope's termination setting is 50 ohm with AC coupling, so I'm not sure if that's the issue. The signal on the spectrum analyzer itself was too weak compared to the input signal. I'm in the process of verifying this, but I think I'm getting the same output on the spectrum analyzer when I feed the PRBS signal on MGT9 RX and check the output from MGT9 TX (even though the sample design has been configured on MGT4). The other thing that bothers me is that I see no LED lights turning on when I download the bit file or when I feed a datastream. The only LED that blinks is the error light, and that's probably b/c I'm not using the CompactFlash card. Should the CHANNEL_UP LEDs start blinking during the process? 
Thanks, Billu Ed McGettigan wrote: > billu wrote: > > Hi All, > > > > I have the following issues while trying to test a sample Aurora core. > > I generated a core w/ the following specs: > > HDL: Verilog, Lane: 1, Lane Width: 2, Interface: Streaming, Upper MGT > > Clock: BREF_CLK, Upper MGT clock on GT_X0Y1 (from ucf file, corresponds > > to MGT4 for a ML321 board) > > > > After using xilperl to compile the design files, I simulated it using > > Modelsim, and uploaded the bit file using Impact to the board. > > > > I'm trying to test the core by feeding a 3.125Gbps (default data rate > > based on onboard oscillator) PRBS signal onto MGT4(RXP & RXN). I test > > the output signal from MGT4(TXP & TXN) by connecting it(TX ports)to a > > oscilloscope and/or spectrum analyzer. Ideally, you would expect the > > protocol to simply transmit that data that it received at the RX ports, > > but the protocol fails to do that. I get an extremely weak signal on > > the spectrum analyzer and bad eye on the scope. I also tried feeding in > > a clock signal of 50MHz into BREF_CLK and testing the setup w/ 1Gbps > > PRBS signal, but that didnt work either. > > > > Can you tell me where I might be going wrong? > > > > Thanks, > > Billu > > > > For starters the Aurora core implements the Aurora Protocol so feeding > the receiver a raw PRBS pattern is seen as garbage as it doesn't match > the protocol. > > You didn't mention what other logic you wrapped around the core to > source and sink the Local Link TX/RX interfaces. If you left these > unconnected in your design, then it's likely that almost everything > has been trimmed. > > However you are getting something out of the TX pair so the bitstream > is doing something (probably constantly sending the initialization > portion of the protocol since it never gets a receiver lock). Since > you are getting a bad eye on the scope I would suggest checking your > scopes termination setting and set them to 50 ohm with AC coupling. 
> > You are using the ML321 and these boards are shipped with pre-compiled > BERT designs on the SystemACE CompactFlash card. Have you tried just > using these designs for your initial testing? > > Ed McGettigan > -- > Xilinx Inc.Article: 103424
pbdelete@spamnuke.ludd.luthdelete.se.invalid wrote: > 3xPHY + FPGA should be able to fix this. Otoh, maybe a pc can handle it. Maybe. See: http://www.ethereal.com/ -- Mike Treseler