Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Brian McFarland wrote:
> Well I gave up on trying to find free ( and legal :-/ ) info about PCI
> online and ordered the Mindshare PCI book. It hasn't arrived yet, but
> I began just writing my own PCI module. I was kinda hoping to be able
> to do this project w/o getting too deep into the specs of PCI, but I
> don't think that's going to happen.

I'd suggest hooking up the opencores PCI core if you have available hardware just to get a feel for what's involved. Once you get the gist of how it hangs together it's really quite simple to hook up something to the back end. The DMA controller shouldn't be that difficult either (although I realise I'm speaking with the benefit of hindsight).

BTW I'd suggest you look into CDBG from probo.com when bringing up a PCI core.

From there, you could invest a little time in benchmarking your application. Even if you end up deciding that the opencores PCI core is not the way to go, you've no doubt (a) learned something about PCI and (b) established a performance testbench for your final solution.

BTW the Mindshare book is certainly going to be a big help in ramping up on PCI.

IMHO, you're going to need bus-mastering DMA to get 54MB/s out of PCI, and that's a *lot* of effort to do from scratch! Just verifying the design is going to be a mammoth effort - take a look at the size of the testbench module in the opencores PCI design to get an idea!!!

Regards,

--
Mark McDougall, Engineer
Virtual Logic Pty Ltd, <http://www.vl.com.au>
21-25 King St, Rockdale, 2216
Ph: +612-9599-3255 Fax: +612-9599-3266

Article: 105601
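[Editor's note: a back-of-the-envelope sketch, not from the thread, of why the posters keep saying 54MB/s needs bus mastering. The "clocks per non-burst word" figure is an assumption for illustration; real overhead varies by target and chipset.]

```python
# Rough check: 54 MB/s target vs. 32-bit/33 MHz PCI.
# clocks_per_word is an assumed figure, not from the thread.

bus_width_bytes = 4          # 32-bit PCI data bus
clock_hz = 33_000_000        # 33 MHz PCI clock

# Theoretical burst peak: one word per clock.
peak = bus_width_bytes * clock_hz / 1e6
print(f"theoretical burst peak: {peak:.0f} MB/s")

# Without bus-mastering bursts, each transfer pays address-phase,
# turnaround and wait-state overhead; assume ~4 clocks per word.
clocks_per_word = 4
single = bus_width_bytes * clock_hz / clocks_per_word / 1e6
print(f"non-burst estimate: {single:.0f} MB/s")
```

Under that assumption the non-burst rate lands well below the 54MB/s goal, while sustained bursting leaves plenty of headroom, which matches the advice in the posts above.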
Hi everyone:

I'm new to FPGA design (have always worked with ASICs before). I'm working with Virtex 4, and from time to time I'd get hold violations on one of my clocks, which XST cannot fix. This clock is driven with an IBUFG. I don't know how to work around this problem other than inserting delays with gates myself. I'd appreciate any feedback/ideas on this problem.

Thank you,
-TT

Article: 105602
If you decide to use CDBG from probo.com to play with your PCI design, you should note that the version on the web only works with Win98 and earlier. I haven't released a Win2K++ version for free usage yet, but we do have one (and a Linux version as well) that are used internally.

CDBG has two modes of operation:
- one mode has the traditional peek/poke commands
- one mode is a C interpreter that lets you write C code without screwing around with DPMI, etc.

ALSO - I suspect it will be next to impossible to get 54MB/sec transfer rates without bus mastering.

John Providenza

Mark McDougall wrote:
> Brian McFarland wrote:
>
> > Well I gave up on trying to find free ( and legal :-/ ) info about PCI
> > online and ordered the Mindshare PCI book. It hasn't arrived yet, but
> > I began just writing my own PCI module. I was kinda hoping to be able
> > to do this project w/o getting too deep into the specs of PCI, but I
> > don't think that's going to happen.
>
> I'd suggest hooking up the opencores PCI core if you have available
> hardware just to get a feel for what's involved. Once you get the gist
> of how it hangs together it's really quite simple to hook up something
> to the back end. The DMA controller shouldn't be that difficult either
> (although I realise I'm speaking with the benefit of hindsight).
>
> BTW I'd suggest you look into CDBG from probo.com when bringing up a PCI
> core.
>
> From there, you could invest a little time in benchmarking your
> application. Even if you end up deciding that the opencores PCI core is
> not the way to go, you've no doubt (a) learned something about PCI and
> (b) established a performance testbench for your final solution.
>
> BTW the Mindshare book is certainly going to be a big help in ramping up
> on PCI.
>
> IMHO, you're going to need bus-mastering DMA to get 54MB/s out of PCI,
> and that's a *lot* of effort to do from scratch! Just verifying the
> design is going to be a mammoth effort - take a look at the size of the
> testbench module in the opencores PCI design to get an idea!!!
>
> Regards,
>
> --
> Mark McDougall, Engineer
> Virtual Logic Pty Ltd, <http://www.vl.com.au>
> 21-25 King St, Rockdale, 2216
> Ph: +612-9599-3255 Fax: +612-9599-3266

Article: 105603
"TT" <tramie_tran@yahoo.com> wrote in message news:1153961734.840434.288510@m79g2000cwm.googlegroups.com...
> Hi everyone:
>
> I'm new to FPGA design (have always worked with ASIC before.) I'm
> working with Virtex 4, and from time to time, I'd get hold violations
> on one of my clocks, which XST cannot fix. This clock is driven with an
> IBUFG. I don't know how to work around this problem other than
> inserting delays with gates myself. I'd appreciate any feedback/idea on
> this problem.
>
> Thank you,
> -TT

The IBUFG is essentially the clock input pin buffer. The IBUFG is not intended to drive the fabric. The low-skew global clock nets are driven by BUFGs (aka BUFGMUX). So, typically you connect the IBUFG output to the BUFG input. I don't think you can get hold violations if the fabric is clocked by a BUFG.

Another typical connection is if you use a DCM (for phase deskew, phase shifting, clock multiplication/division, etc...). Then, the IBUFG drives a DCM input, a DCM output will then drive a BUFG input, and the BUFG output will also be routed back to the DCM feedback input (and, of course, drive the fabric clock inputs).

Bob

Article: 105604
"Bob" <nimby1_NEEDSPAM@earthlink.net> wrote in message news:G6Vxg.5844$157.5522@newsread3.news.pas.earthlink.net...
>
> "TT" <tramie_tran@yahoo.com> wrote in message
> news:1153961734.840434.288510@m79g2000cwm.googlegroups.com...
>> Hi everyone:
>>
>> I'm new to FPGA design (have always worked with ASIC before.) I'm
>> working with Virtex 4, and from time to time, I'd get hold violations
>> on one of my clocks, which XST cannot fix. This clock is driven with an
>> IBUFG. I don't know how to work around this problem other than
>> inserting delays with gates myself. I'd appreciate any feedback/idea on
>> this problem.
>>
>> Thank you,
>> -TT
>
> The IBUFG is essentially the clock input pin buffer. The IBUFG is not
> intended to drive the fabric. The low-skew global clock nets are driven by
> BUFGs (aka BUFGMUX). So, typically you connect the IBUFG output to the
> BUFG input. I don't think you can get hold violations if the fabric is
> clocked by a BUFG.
>
> Another typical connection is if you use a DCM (for phase deskew, phase
> shifting, clock multiplication/division, etc...). Then, the IBUFG drives a
> DCM input, a DCM output will then drive a BUFG input, and the BUFG output
> will also be routed back to the DCM feedback input (and, of course drive
> the fabric clock inputs).
>
> Bob

Also, in V4 there are "local" clocking options. The pins labeled "CC" also have this thing called a BUFIO (local clock buffer). BUFIOs can drive IOB clock resources. BUFIOs can also drive BUFRs. BUFRs can drive (some of) the fabric (as do BUFGs).

It helps to have your own local dedicated FAE. Otherwise, you need to RTFM. I hate reading the f'ing manuals, so I have our FAE's phone number memorized. He has to answer our questions or we'll think bad thoughts about him, and if you're a Twilight Zone fan then you'll understand the ramifications of this.

Bob

Article: 105605
johnp wrote:
> If you decide to use CDBG from probo.com to play
> with your PCI design, you should note that the version
> on the web only works with Win98 and earlier.

Yes, but if you're constantly re-configuring your FPGA w/PCI core, then you'll be re-booting constantly as well. The most time-efficient way I've found to do bring-up of a PCI core, or even back-end peripherals, is to have *DOS* booting off your HDD and run CDBG from there.

> CDBG has two modes of operation:
> - one mode has the traditional peek/poke commands
> - one mode is a C interpreter that lets you write C code
> without screwing around with DPMI, etc.

Yes, the C interpreter is quite nice for 'scripting' tests. I brought up the opencores IDE controller with opencores DMA and opencores PCI using C code to do both PIO and DMA IDE accesses. The nice part was the fact that I could transcribe the C code almost line-for-line into Verilog for the equivalent HDL testbench routines.

And for further re-usability, I ended up using the OCIDE core in a NIOS-based design (no PCI), which allowed me then to use the original CDBG C code almost unchanged as test routines for that project too!

So thanks John, you've saved a lot of time for me at least!

Regards,

--
Mark McDougall, Engineer
Virtual Logic Pty Ltd, <http://www.vl.com.au>
21-25 King St, Rockdale, 2216
Ph: +612-9599-3255 Fax: +612-9599-3266

Article: 105606
Thanks to all who posted. I decided to chill out and just go passive on this. I have been butting heads a lot and have won every fight on the facts. But I am just too sick of this. So I am doing a timing analysis which will either end up specifying what the FPGA needs to meet (I will have to dig into the VHDL to understand the design) or I will just leave the relevant holes and explain to the superiors that this is the info I am not getting. I will need to dig into the design either way to make sure I understand which timing parameters really are an issue and which are not. Some of the input setup times *could* be an issue depending on whether they are latching them or not.

For now I have analyzed the timing to the memory chip and, believe it or not, there is a small bug! The timing spec on the DSP is not really written correctly for an async interface. All the timing is relative to the internal clock, so you have to add a couple of parameters to get the timing between the strobe and the setup/hold times on the data. Turns out the strobe can go away 2 ns before the clock edge that latches the read data. So the memory chip has to have a 2 ns hold time which it doesn't have. I don't think this will be a problem, but it does not meet formal timing analysis.

Nico Coesel wrote:
> Can't you shove the whole problem upwards? And see what is falling
> down again? It seems to me you'll need to know how the FPGA's IOBs are
> used in order to know what timing to expect.

I originally tried this, but management is very weak. I went around for two weeks on an issue of a capacitor with the software guy. The project manager kept taking his side even after I proved him wrong. The software guy came back with bogus test data and it was only after I showed the manager that the data was not just two different value caps, but two different boards! So management is not just weak, but also poorly informed (read that as too stupid to know when they are being misled).

We have some good engineers and we have some "political" engineers who think they need to spend their time making themselves "look" good rather than "doing" good. I can't seem to figure out how to do my job correctly without having problems from some of these types.

> However, based on your statement that the clock period is approx 10ns,
> they must have used the flipflops inside the IOB. Each IOB has 6
> flipflops. 2 to control the output enable, 2 to control the output and
> 2 to capture the input. The reason why there are 2 flipflops for each
> function is to be able to use the IOB at double datarate (DDR). The
> flipflops can be programmed so that one will act on the rising edges
> and the other one will act on the falling edges.

I can't say exactly what they did. The interface is not synchronous. The DSP specs the timing on the interface relative to the CLKOUT which is a low voltage clock at the CPU speed. The clock they are using is CLKMEM which is not used in the spec for the async memory interface. It is not the same speed or the same delay. So the control signals are in reality asynch to the clock. The FPGA treats them as asynch and goes through two registers for metastability reduction... at least that is what I am told.

> The timing path from these flipflops to the actual pin is well
> documented. However, you'll still need to know whether they switched
> IOB_DELAY on or off for the input path.

I could not find this documentation for the tristate path. The data sheet gives specs on timing for the path from the clock input pin to the output pin using the IOB FF that clocks the data, but no such similar spec exists from the clock input pin to the output pin going tristate when using the IOB tristate FF. I will check the design tomorrow and see if that is what the design uses. Even so, I will have to get the delay from the tools since it is not in the data sheet.

Article: 105607
A possibly simple test might be to write a simple testbench model of the DSP to write and read back some registers. During a write cycle keep the data bus unknown until the latest possible time that the spec says before the data will be valid, and then also make it unknown at precisely the earliest time that the spec for the part says that the bus starts going tri-state or otherwise unknown. Do the same for the address bus relative to the read/write controls. Since the spec doesn't reference timing relative to the clock that the FPGA is using, have the model start and end cycles at various points relative to the clock cycle (say check every 1 or 2 ns throughout the entire 10 ns clock period).

If all the registers read back correctly it doesn't say that all the timing works out, but at least it's not totally out to lunch ('someone' would still need to do the timing analysis). But if anything does read back with unknowns then it says that there definitely is a problem, and you should be able to punt it back to the FPGA guys and say "Here is a model of the bus that meets the spec but when connected to your design it fails".

This all assumes that the bus model is a relatively simple one (since I'm not actually familiar with the TMS320VC5510 itself).

KJ

Article: 105608
Hello, I'm working with the EDK and now I have got a question. At the moment I download the hardware with the EDK and then I download and start the software with the debugger. How can I load the software and hardware into the Flash, so that I can run the system without the JTAG chain? With Impact I can only load the hardware into the Flash and it works. There is an option in the EDK to load the software into the Flash but that doesn't work well and I still have to use XMD. Is there a possibility to put the *.elf and *.bit file together?

Now my second question: I built a system which uses uc/OS II and the Micrium TCP/IP stack and I have got a timing problem. I created an ISR for a module with a little arithmetic operation to measure the time it will take to solve it. It takes around 2us for 5 assembler instructions. Furthermore it takes around 20us to enter the interrupt controller handler after the hardware generated the interrupt. Could it be that this problem exists because I use the debugger to run the software? I'm using the ML403 evalboard and the MicroBlaze (100MHz).

I hope somebody can help me.

Cu
Olli

Article: 105609
Olli wrote:
> How can I load the Software and Hardware into the Flash,
> so that I can start run the system without the JTAG chain?

Usually there is an option to create a PROM file instead of a bitstream file. In Xilinx Project Navigator's "Process View", underneath the "Generate Programming File" process is the selection "Generate PROM, ACE, or JTAG File," which I assume is for creating a file to load the serial PROM that contains the design that runs at power on. You may need to change some jumpers in order to program the serial PROM also.

Ron

Article: 105610
Hi, where do I find the Project Navigator? I'm only working with the EDK, and I use Impact to generate an *.EXO file from the EDK *.bit file so I can load my hardware from the flash. My problem is that the *.bit file from the EDK only comprises the hardware and not the software. I don't know how I can create a file which contains the hardware and the software (my program and a bootloader, which runs my software in the DDR-RAM).

Olli

Article: 105611
Hello,
I have a design with two FPGAs (Xilinx Spartan3). Both use a common clock, and both can send data to the other one in a synchronous manner. Because of possible clock skew, the critical issue seems to be meeting input hold time requirements (setup is not a problem). This can be solved by adding additional delay on the data path, and I wanted to use the IOBDELAY element for this purpose.

But I'm not sure how to calculate the hold time of the input flip-flop when the IOBDELAY is added and a DCM is used. The datasheet specifies only TPHDCM (IOBDELAY=NONE, DCM used) or TPHFD (IOBDELAY=IFD, DCM not used). There is also the TIOICKPD parameter (hold time at the IFF with respect to the clock on this flip-flop, and not on the global clock pin), but then I'm not sure how to calculate the skew between the IFF clock and the clock on the input pin (the DCM is used).

Any ideas how to approach this problem?

--
Regards
RobertP.

Article: 105612
Once your design is placed & routed, you can get a timing report that gives you the setup and hold requirements of each input to the clock pin(s). You can then adjust the clock timing if necessary using the fixed or variable delay of the DCM (fixed is easier if you can find a value that works). By the way, if you make the timing adjustment with the DCM, you may find it easier to remove the IOBDELAY as it will give you a larger setup / hold window to center the clock in. Also note that the clock to output timing, when measured from the clock input pin and not the internal global net, can be quite long, so meeting hold time may not be as much of a problem as you might think from reading the IOB timing numbers in the data sheet.

HTH,
Gabor

RobertP. wrote:
> Hello,
> I have a design with two FPGAs (Xilinx Spartan3). Both use common clock,
> and both can send data to the other one in a synchronous manner. Because
> of possible clock skew, the critical seems to be meeting input hold time
> requirements (setup is not a problem). This can be solved by adding
> additional delay on the data path, and I wanted to use IOBDELAY element
> for this purpose.
> But I'm not sure how to calculate the hold time of the input flip-flop
> when the IOBDELAY is added and a DCM is used. The datasheet specifies
> only TPHDCM (IOBDELAY=NONE, DCM used) or TPHFD (IOBDELAY=IFD, DCM not used).
> There is also TIOICKPD parameter (hold time at the IFF in respect to
> clock on this flip-flop, and not on the global clock pin), but then I'm
> not sure how to calculate the skew between IFF clock and clock on the
> input pin (the DCM is used).
> Any ideas how to approach this problem?
>
> --
> Regards
> RobertP.

Article: 105613
Gabor wrote:
> Also note that the clock to output timing when measured
> from the clock input pin and not the internal global net can be quite
> long, so meeting hold time may not be as much problem as you
> might think from reading the IOB timing numbers in the data sheet.

Why would clock to output be longer than specified in the datasheet (TICKOFDCM - pin-to-pin clock to output)? Also there is no Min TICKOFDCM specified - I found some info about how to estimate it (25% of worst case, but some people think it is not conservative enough).

--
Regards
RobertP.

Article: 105614
rickman wrote:
<snip>
>> The timing path from these flipflops to the actual pin is well
>> documented. However, you'll still need to know whether they switched
>> IOB_DELAY on or off for the input path.
>
> I could not find this documentation for the tristate path. The data
> sheet gives specs on timing for the path from the clock input pin to
> the output pin using the IOB FF that clocks the data, but no such
> similar spec exists from the clock input pin to the output pin going
> tristate when using the IOB tristate FF. I will check the design
> tomorrow and see if that is what the design uses. Even so, I will have
> to get the delay from the tools since it is not in the data sheet.

Rather than delving deep into the design, perhaps Timing Analyzer can supply a lot of information. If the I/O are unconstrained, the Analyze/Against Auto Generated Design Constraints will give you the input and output timing. These can show the full combinatorial paths (or lack thereof) between the pads and registers. The Data Sheet section of the report will also give you Tcko, Tsu, and Th for the design. You may even find the IOBDELAY values here rather than having to go to the pad report.

Article: 105615
GaLaKtIkUs™ wrote:
> Hi all,
> I would like to know before upgrade if EDK8.1i is compatible with
> ISE8.2i.
>
> Thanks in advance

Just found this out yesterday: EDK8.1i won't even run if ISE8.1 isn't installed. A co-worker with little free disk space uninstalled his ISE8.1i and installed ISE8.2i. Yesterday, he wanted to run XMD and found that EDK8.1i won't start up because it can't find ISE8.1i. It's possible to have several versions of EDK and ISE installed (if you have the room).

---
Joe Samson
Pixel Velocity

Article: 105616
"Olli" <Emperor_@gmx.de> wrote in message news:ee9d396.-1@webx.sUN8CHnE...
> Hello, I'm working with the EDK and now I have got a question. At the moment I download the hardware
> with the EDK and then I download and start the software with the debugger. How can I load the software
> and hardware into the Flash, so that I can run the system without the JTAG chain? With Impact I can
> only load the hardware into the Flash and it works. There is an option in the EDK to load the software into
> the Flash but that doesn't work well and I still have to use XMD. Is there a possibility to put the *.elf and *.bit
> file together?

There is more than one FLASH memory on the ML403. Read some documentation. Basically, you need to create a bootloader application, which can be done with the FLASH programming tool in EDK while writing your main *.elf file into the linear FLASH. Then you need to update your bitstream with the bootloader elf (by default this will create the download.bit file) and then program the Platform FLASH using Impact.

/Mikhail

Article: 105617
Hi all,
I was wondering if anyone had succeeded in saving time by using guided MAP/PAR. I personally find that every time I want to use it, even in the most obvious cases when 99.9% of the design hasn't changed, I then have to re-run everything from scratch anyway...

Thanks,
/Mikhail

Article: 105618
>> More or less. Wait a tick. Look at the data sheet of the 74xx297, it
>> says the lock range (pull range) of the PLL is
>> delta_f_max = fc * M / (2*K*N)
>> using your values
>> delta_f_max = 50 Hz * 600000 / (2* 300000 * 40) = 1.25 Hz
>> Hmm, this should be enough.

The I/D counter is a divide-by-2. In locked condition, its output is 30MHz/2 = 15MHz. Therefore, in addition to the N=40 divider, I have to use an f_out prescaler (=7500). In this case:

delta_f_max = fc * M / (2*K*N*f_out_prescaler)
delta_f_max = 50 Hz * 600000 / (2* 300000 * 40 * 7500) = 1.25/7500 Hz

Is this correct?

Isa

Falk Brunner wrote:
> raso schrieb:
> > Hi Jon,
> >
> > Falk Brunner sent me his VHDL implementation of 74LS297. I modified it
> > so that K-counter and I/D counter (DCO) run at system clock which is
> > 30MHz.
>
> Wasn't it already this way?
>
> > The system has following blocks;
> >
> > - The first component is a JKFF based phase detector.
> > - Then, there is a K-counter operating at 30MHz. With 30MHz clock, a
> > full 50Hz period means M=30e6/50=600000 ticks. For minimum jitter,
> > modulus of K counter is set M/2 which is 300000. The borrow pulse
> > decreases the modulus of I/D counter (2KHz DCO) while carry pulse
> > increases it by 1. So in locked condition, it generates 1 carry and
> > 1 borrow pulse within one 50Hz period and they cancel each other.
> > - I/D counter is a DCO (modulus counter). It operates at 30MHz. When
> > it is locked to 50Hz it has modulus of 15000. When 50Hz changes, the
> > borrow and carry pulses should adjust the modulus of I/D counter (carry
> > pulse increases the modulus by 1, and borrow decreases it by 1). By
> > this way, period of 2kHz pulses is adjusted according to 50Hz input.
> > - N-Counter which divides 2KHz clock by 40.
> >
> > The only parameter that I can play with is the modulus of K counter.
>
> More or less. Wait a tick. Look at the data sheet of the 74xx297, it
> says the lock range (pull range) of the PLL is
>
> delta_f_max = fc * M / (2*K*N)
>
> using your values
>
> delta_f_max = 50 Hz * 600000 / (2* 300000 * 40) = 1.25 Hz
>
> Hmm, this should be enough.
>
> > I can't use direct implementation of 74LS297. Because, it synchronises
> > f_in and f_out by inserting or deleting 2KHz pulses (I/D pulses). I
>
> This is not true. It inserts master clock cycles, in your case, 30 MHz
> clocks. Which are 1/3000 of a 2 kHz period.
>
> > can't tolerate inserting or deleting 2KHz pulses. It has to keep the
> > number of 2kHz pulses as 40 in each 50Hz period.
>
> Sure, otherwise you couldn't call it a PLL ;-)
>
> Regards
> Falk

Article: 105619
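[Editor's note: a numeric check of the lock-range figures quoted in this thread, using the 74xx297 formula as cited by the posters. Whether the extra output prescaler belongs in the denominator is the thread's open question; this only verifies the arithmetic both sides used.]

```python
# 74xx297 lock range, as quoted from the data sheet in the thread:
#   delta_f_max = fc * M / (2 * K * N)

fc = 50          # input frequency, Hz
M = 600_000      # master-clock ticks per input period (30 MHz / 50 Hz)
K = 300_000      # K-counter modulus (M/2 for minimum jitter)
N = 40           # output divider

delta_f = fc * M / (2 * K * N)
print(delta_f)   # matches the 1.25 Hz figure in both posts

# The follow-up question mechanically divides by an assumed
# f_out prescaler of 7500; that yields the much smaller range quoted.
prescaler = 7500
print(delta_f / prescaler)
```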
Weng Tianxiang wrote:
> Hi Brannon,
> Can you explain further about a full Huffman analysis for JPEG?
>
> I have no knowledge about JPEG. What books do you recommend about
> the topic?

Google this: "MIL-STD-188-198A", and Google this: "itu-t81.pdf". Those are two fundamental JPEG documents. There is one nice book on the topic I've seen, but I don't have a copy handy. If you can understand the above two documents, you won't need it. It has a pink cover. And if you don't know what Huffman encoding is, you had better Google that one as well.

Article: 105620
> I was wondering if anyone had succeded in saving time by using guided
> MAP/PAR. I personally find that every time I want to use it, even in the
> most obvious cases when 99.9% of design hasn't changed, I then have to
> re-run everything from scratch anyway...

80% of the times I ran it, it crashed. The other 20% of the time it took longer than running without it. That includes all versions from 6.1.1-8.1.1 with a range of test cases. I personally think the thing is a joke. It's a great idea, nice sales pitch, and terrible implementation. I submitted some bugs that were supposedly to be fixed in 8.2, but I haven't tried it yet. Apparently it doesn't play nice with overly tight constraints or duplicated registers.

Here are a few thoughts on what I would have expected with "exact" mode and never saw:

If the incoming code
1. has less (or equal) logic than that of the previous run
2. and the logic it has matches that of the previous run
3. and the previous run passed timespecs
then that compile should take no time at all.

If the incoming code
1. has all the same logic as the previous run
2. in addition to some more logic
then the compile time should take the same amount of time as compiling the difference of the logic on a smaller chip representing the amount of available logic.

If the net names don't match exactly but the logic names do, well good freak, make some assumptions about the net names. That issue turns the whole thing into a major headache.

Article: 105621
Hi everyone (especially those Xilinx chaps) :-)

I've been having an interesting debate with a colleague here regarding Virtex 4 Rocket IO (and Virtex II for that matter). The challenge is to make a really high speed signal sampler in the fabric of one of these FPGAs by using the Rocket IO in a custom manner. I'm talking some GS/s.

We figure a local clock of 100M should be multiplied by 20 inside the Rocket IO, giving 20 bits per 100M period that can be shuffled to get some indication of the input waveform - i.e. a 2G sampler.

Ok, ignoring the hugely important facts that the FPGA has to be able to process this, that the PCB has to be well designed, and that the input signal might have some new frequency and electrical constraints, are there any pitfalls we've missed?

btw: the idea comes from an expansion of Figure 7 of: http://www.eetkorea.com/ARTICLES/2004JUN/2004JUN22_PLD_RFD_AN05.PDF

Are there any potential flaws in these ideas anyone can see?

Thanks in advance,
Ben

Article: 105622
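[Editor's note: a sketch of the arithmetic behind the proposed Rocket IO sampler, nothing more. The ×20 deserialization factor is the assumption stated in the post; achievable line rates depend on the specific transceiver and speed grade.]

```python
# Numbers behind the "2G sampler" idea: treat the deserializer as a
# 1-bit sampler running at ref_clk * bits_per_period.

ref_clk = 100e6        # local reference clock, Hz (from the post)
bits_per_period = 20   # serializer factor assumed in the post

sample_rate = ref_clk * bits_per_period   # 1-bit samples per second
resolution_ps = 1e12 / sample_rate        # time between samples, ps

print(sample_rate)     # 2 GS/s
print(resolution_ps)   # 500 ps between samples
```

At 2GS/s of 1-bit samples, each 100MHz fabric cycle delivers a 20-bit word to "shuffle", which is the data rate the rest of the design has to keep up with.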
Ok, so I think I can get past that part for now. Now I'm trying to recreate the tutorial. I notice in Base System Builder it doesn't list the LCD screen as a peripheral, so I am trying to add it manually. I have the pcore and the drivers; I can add it as a peripheral and set it up on the bus successfully, but I am having issues with the drivers. I copied them to the include folder and the folder that my test code is in, and I cannot get it to compile correctly. This is possibly my lack of knowledge in the C language?

I'm starting to get the hang of all this. I'm fairly fresh outta school (getting a little ripe) but I don't work with anyone that's had more experience than me in this, so I'm finding this a bit frustrating, especially when I have a project I am to be working on!

I'm also messing around with a bootloaded design, and downloading to the flash and external ram. From reading over the tutorial it looks like XPS/EDK builds a bootloader for you... I suppose a basic bootloader. Well, I'll see what I can come up with. Thanks for your support!

Article: 105623
Thanks Bob for the info.

Bob wrote:
> "Bob" <nimby1_NEEDSPAM@earthlink.net> wrote in message
> news:G6Vxg.5844$157.5522@newsread3.news.pas.earthlink.net...
> >
> > "TT" <tramie_tran@yahoo.com> wrote in message
> > news:1153961734.840434.288510@m79g2000cwm.googlegroups.com...
> >> Hi everyone:
> >>
> >> I'm new to FPGA design (have always worked with ASIC before.) I'm
> >> working with Virtex 4, and from time to time, I'd get hold violations
> >> on one of my clocks, which XST cannot fix. This clock is driven with an
> >> IBUFG. I don't know how to work around this problem other than
> >> inserting delays with gates myself. I'd appreciate any feedback/idea on
> >> this problem.
> >>
> >> Thank you,
> >> -TT
> >
> > The IBUFG is essentially the clock input pin buffer. The IBUFG is not
> > intended to drive the fabric. The low-skew global clock nets are driven by
> > BUFGs (aka BUFGMUX). So, typically you connect the IBUFG output to the
> > BUFG input. I don't think you can get hold violations if the fabric is
> > clocked by a BUFG.
> >
> > Another typical connection is if you use a DCM (for phase deskew, phase
> > shifting, clock multiplication/division, etc...). Then, the IBUFG drives a
> > DCM input, a DCM output will then drive a BUFG input, and the BUFG output
> > will also be routed back to the DCM feedback input (and, of course drive
> > the fabric clock inputs).
> >
> > Bob
>
> Also, in V4 there are "local" clocking options. The pins labeled "CC" also
> have this thing called a BUFIO (local clock buffer). BUFIOs can drive IOB
> clock resources. BUFIOs can also drive BUFRs. BUFRs can drive (some of) the
> fabric (as do BUFGs).
>
> It helps to have your own local dedicated FAE. Otherwise, you need to RTFM.
> I hate reading the f'ing manuals, so I have our FAE's phone number
> memorized. He has to answer our questions or we'll think bad thoughts about
> him, and if you're a Twilight Zone fan then you'll understand the
> ramifications of this.
>
> Bob

Article: 105624
"Brannon" <brannonking@yahoo.com> wrote in message news:1154014545.550301.275490@m73g2000cwd.googlegroups.com...
>
> 80% of the times I ran it, it crashed. The other 20% of the time it
> took longer than running without it.

That's exactly what I am experiencing as well!!!

/Mikhail