John Adair wrote:
> Some FPGAs like the Xilinx Virtex-4 (FX - PowerPC) come with a
> hard (always there) processor within the programmable fabric. There are
> also soft core microprocessors that use the general FPGA fabric for
> their implementation, like MicroBlaze.
>
> In both cases the microprocessor cores generally can do what their
> standalone equivalents can do. The difference comes in that unused FPGA
> fabric can be used to implement any digital function or interface
> you like within the bounds of the device size and speed. Some
> interfaces will need external buffering depending on the voltage levels,
> e.g. a transceiver for CAN bus.
>
> The one weak area is replacing a microcontroller that has analogue
> interfaces. Currently there is virtually no analogue support in FPGAs,
> so any analogue functions like A to D tend to need add-on devices.

Another weak area, wrt microcontrollers, is the lack of on-chip FLASH code memory. Some small FPGA cores/tasks can run out of BRAMs, and those get quite close to being equivalent to the smallest microcontroller CORES.

Then there are common uC peripherals (besides ADCs) like low power oscillators, calibrated RC oscillators, WDOGs, brown-out detectors, DACs, and 5V compatible pins... All this means you could find an FPGA used alongside a microcontroller, with each doing what it is best at.

-jg

Article: 108376

The reason I'm doing this is implied in my third paragraph: "Now, at least all the red is gone from Post-Map simulation and some of the bytes are right in my first section of output." In other words, I'm doing this because it wasn't working, according to the simulator, and this improved matters significantly but not completely. The "red" I referred to was used by my simulator, ModelSim III XE 6.1e starter edition, to indicate unknown output values, I think, since X's for unknowns appeared post-map. They no longer appeared after I did the above, and then some but not all of my output was actually correct.

In my first sentence I wrote "I'm trying to get post-map simulating correctly now that I have post-translate simulating correctly." In other words, I'm trying to get the post-map design correct as shown by the simulator, because it isn't. I got the behavioural simulation showing correct logic, and I got the post-translate simulation to show correct operation by clocking my ROMs so they wouldn't be optimized away.

Do you have any ideas to prevent "redundant logic" from being removed? I've been told the key is in using a different VHDL coding style. I'm also going to look into putting "Save" on all my nets. This is a constraint, according to dev.pdf, the Development System Reference Guide, so I will look it up in the Constraints Guide, cgd.pdf. "Keep Hierarchy" might help. I suspect this is the problem because post-translate I was having my ROMs inferred as RAMs and optimized away, and, as I wrote in my third paragraph, "My removed "redundant logic" list was made a little smaller," and operation was significantly improved, as indicated in that paragraph. I figure that restoring more removed logic might do the trick. It certainly looks like the thing to try.

Does anyone have some samples of VHDL code before and after that were interpreted as redundant and then weren't after being changed?

Best regards,
-James

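A minimal sketch of the two constraints mentioned above, written the way they are commonly attached in VHDL source for XST; the entity, port and signal names here are invented, and the exact attribute values accepted vary by tool version, so check cgd.pdf before relying on them:

library ieee;
use ieee.std_logic_1164.all;

entity rom_wrapper is
    port (
        clk  : in  std_logic;
        addr : in  std_logic_vector(3 downto 0);
        dout : out std_logic_vector(7 downto 0)
    );
end entity rom_wrapper;

architecture rtl of rom_wrapper is
    signal rom_q : std_logic_vector(7 downto 0);

    -- Ask synthesis to preserve this net rather than optimize it away.
    attribute keep : string;
    attribute keep of rom_q : signal is "true";

    -- Keep this block's hierarchy intact through synthesis.
    attribute keep_hierarchy : string;
    attribute keep_hierarchy of rtl : architecture is "true";
begin
    process (clk)
    begin
        if rising_edge(clk) then
            case addr is
                when "0000" => rom_q <= x"3E";
                when "0001" => rom_q <= x"7F";
                when others => rom_q <= x"00";
            end case;
        end if;
    end process;

    dout <= rom_q;
end architecture rtl;
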
Article: 108377

Sorry for being vague. I am using XPS and ISE. I figured out which paths were causing negative slack on the signals using the timing analyzer. Thanks a lot for the reply. I really appreciate it.
-D

KJ wrote:
> <dhruvakshad@gmail.com> wrote in message
> news:1157728453.205266.321290@d34g2000cwd.googlegroups.com...
> > I am using a Virtex-II Pro PCI card. The card has a PCI bridge with a bus
> > linking the processor in the bridge to the FPGA. The constraints on the
> > processor bus signals were given by the vendor. My design gives me a
> > negative slack on 2 of the signals on the processor bus. Will this
> > affect the design? I am doing DMA from the FPGA to the host RAM along
> > the bus. The transfer rate is low.
> > I was googling for links to read on negative slack but could not find
> > good ones for a starter.
>
> Not knowing what tool you're using that is reporting the slack makes it
> impossible to be certain, but the basic definition of 'slack' is roughly...
>
> 'Slack' is the amount of time you have, measured from when an event
> 'actually happens' to when it 'must happen'. The term 'actually happens'
> can also be taken as being a predicted time for when the event will
> 'actually happen'.
>
> When something 'must happen' can also be called a 'deadline', so another
> definition of slack would be the time from when something 'actually happens'
> (call this Tact) until the deadline (call this Tdead).
>
> Slack = Tdead - Tact.
>
> Negative slack implies that the 'actually happen' time is later than the
> 'deadline' time... in other words it's too late and a timing violation... you
> have a timing problem that needs some attention.
>
> KJ

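A worked example with made-up numbers: if a constraint requires data to be valid at a flip-flop input 10 ns after the launching clock edge (Tdead = 10 ns), and the timing analyzer predicts it will actually arrive at Tact = 11.3 ns, then Slack = Tdead - Tact = 10 - 11.3 = -1.3 ns, i.e. the path misses its deadline by 1.3 ns and must be sped up (or the constraint relaxed) by at least that much.
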
Article: 108378

KJ wrote:
> ...
> is necessary to go from interface #10 to interface #11. Presumably
> this is simply the OpenCores DDR controller or some other commercial
> controller.
> ...

I'm looking at the OpenCores DDR controller for reference and educational purposes; that's what it appears most suited for. I'm all new to VHDL and FPGA design, mind you, but one has to start somewhere. I did a fair amount of 7400 series logic design about 1980-1982, but things are a bit more intricate now.

> Good question. I can't really give details, but I'll say that I've
> implemented the approach that I mentioned for interfacing six masters
> to DDR and the logic resources consumed were less than but roughly
> comparable to that consumed by a single DDR controller. I had all the
> same issues that you're aware of regarding how you need to properly
> control DDR to get good performance and all.

This is what I consider the most important paragraph of your response, and based on this I'll probably abandon the multiple DDR-aware master idea.

My understanding of Wishbone is certainly incomplete; I had forgotten it was a master/slave system for connecting 2 endpoints. What I had been thinking of was sort of like one of the Xilinx busses (OPB?) where they just wire-OR control signals together, and all inactive bus drivers are supposed to drive their signals to logic 0 when they don't own the bus. This boils down to each of the 4 "masters" just having some representation of the DDR's pins, plus a mechanism to request the bus. Until the bus has been granted, each master shuts up. Once the bus is granted, the owning master can then diddle the lines and it's like that single master is controlling the DDR itself. Now in retrospect it occurs to me the main benefit of something like that would be in minimizing latency, but only in the case where the DDR is mostly inactive. If it's frequently being used, then each master must wait for its turn anyway and latency is out the window.

>> Complications:
>> 1) To support bursting, it needs to have some sort of fifo. An easy way
>> would be the core stores up the whole burst, then transacts it to the
>> DDR when all is known.
>
> I'd suggest keeping along that train of thought as you go forward but
> keep refining it.

I'm starting to like this approach. Each master could then just queue up an access, say:

WRITE = ADDRESS + some number of 32 bit words of data to put there
READ  = ADDRESS + the number of words you want from there

In either case data gets shoved into a fifo owned by the master. Once the transaction is queued up, the master just needs to wait until it's done.

Let's see what the masters are:
1) CPU doing cache line fills + flushes, no single beat reads/writes
2) Batches of audio data for read
3) Video data for read
4) Perhaps DMA channels initiated by the CPU, transfer from BRAM to memory, say for ethernet packets

For 2, 3 and 4, latency isn't an issue. For 1, latency can be minimized if the CPU uses BRAM as a cache, which is the intent.

Thanks for taking the time to write all that!

Dave
--
David Ashley     http://www.xdr.com/dash
Embedded linux, device drivers, system architecture

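A minimal sketch, with invented names and widths, of the kind of command record each master might push into its fifo for the shared controller to replay against the DDR as one burst:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package ddr_cmd_pkg is
    type ddr_cmd_t is record
        is_write  : std_logic;              -- '1' = write burst, '0' = read burst
        address   : unsigned(25 downto 0);  -- starting word address in DDR space
        burst_len : unsigned(3 downto 0);   -- number of 32-bit words in the burst
    end record;
end package ddr_cmd_pkg;

The write data itself (or the returned read data) would travel through the master's fifo alongside the command, so the arbiter only ever sees complete, ready-to-run bursts.
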
Article: 108379

Jonathan Bromley wrote:
> On Thu, 07 Sep 2006 03:19:32 GMT, "KJ" <kkjennings@sbcglobal.net> wrote:
>
>> I've long since found that more extensive simulation is many times much more
>> productive in weeding out the bugs than trying to debug on actual hardware.
> [...]
>
> Many, many thanks for a delightfully clear exposition of a vital idea.
>
> Just occasionally, the orders-of-magnitude faster throughput of tests
> that you win from a hardware prototype outweighs the pathetically bad
> observability and controllability of the physical hardware as compared
> with a simulation. Only occasionally, though.

And to prove your point: video processing. Simulating a single frame may take only 20-30, but isolated frames are meaningless when testing a hardware MAADI implementation, where it is the overall impression over a long sequence that really matters. Running the same test in hardware may cost an hour in synthesis, but it then allows viewing the whole video sequence in real-time instead of having to wait a few days for the simulation.

Simulation is good for finding functional flaws and debugging. Hardware validation is best used for real-world performance evaluation.

Throughput is one good argument for hardware testing. For HD video, it is pretty much the only sane option for both throughput and general practicality: testing against a large variety of input sources with various parameters would tie up simulation hosts/licenses for days and gobble up TBs for storing simulation data.

With hardware validation as a backup, simulations can be more focused on small, highly feature-specific tests while hardware runs help identify missing test cases and problematic areas based on real-world results. Being able to preview a module's performance in real-time and being satisfied with its real-world results beforehand is a major relief when the target device is an ASIC.

--
Daniel Sauvageau
moc.xortam@egavuasd
Matrox Graphics Inc.
1155 St-Regis, Dorval, Qc, Canada
514-822-6000

Article: 108380

Jecel wrote:
> rickman wrote:
> > [sample code with 8 bit instructions]
> > Any interest in comparing this to one of the proprietary cores such as
> > microBlaze or NIOS2? I have wanted to see how efficient they would be
> > implementing a Forth, but I have yet to work with one of them.
>
> This is an interesting instruction set and seems to make good use of
> the two stacks.
>
> My impression is that neither MicroBlaze nor NIOS II would do a
> particularly good job of executing Forth since their memory
> instructions seem to be of the simple register+short immediate offset
> type. Contrast that with the ARM (or any CISC) which can use
> pre-decrement and post-increment (or pre-increment and post-decrement)
> to move things to and from the stack.

One of the nice things about an FPGA processor core is that they are typically expandable. It should be a simple matter to add hardware-mapped stacks to any of these cores. They could be memory mapped, or even mapped to registers. I think there are four operations on a stack: read, write, push and pop. So unless there are some addressing modes that could be usurped, it might require mapping each stack to two registers, one for read/write and one for pop/push.

But then I am not sure the best way to implement Forth is to simulate the hardware of a stack machine. Modern Forth compilers seem to do a durn good job of mapping the Forth functionality to RISC/CISC processors. I guess the question is whether I can make a MISC processor that works as well as a RISC/CISC processor and uses fewer hardware resources.

> The native code for this particular task, however, would be reasonably
> small and fast for nearly any RISC including the two proprietary cores.
> Just two loads, two stores, an increment, a comparison and a branch
> (the last two are often a single instruction).

Thanks for pointing that out. The original FPGA that I designed my Forth core for did not have a vendor-supplied RISC core, but all of the newer ones do. To compare to my MISC code, the RISC would need to load the two addresses and a count, which would be three more instructions and I'm not sure how many bytes, at least 6 more for a total of about 20 bytes. So my MISC's 19 bytes / 13 loop clock cycles is at least as good as, if not better than, the RISC's 20 or more bytes, even if it is slower in clock cycles.

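A minimal sketch, with invented depth, width and interface, of the sort of small hardware stack that could be bolted onto a soft core and exposed to it as a register or memory-mapped location:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity hw_stack is
    port (
        clk  : in  std_logic;
        push : in  std_logic;
        pop  : in  std_logic;
        din  : in  std_logic_vector(15 downto 0);
        tos  : out std_logic_vector(15 downto 0)   -- top of stack, read combinatorially
    );
end entity hw_stack;

architecture rtl of hw_stack is
    type stack_mem_t is array (0 to 15) of std_logic_vector(15 downto 0);
    signal mem : stack_mem_t := (others => (others => '0'));
    signal sp  : unsigned(3 downto 0) := (others => '0');  -- points to the next free slot
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if push = '1' then
                mem(to_integer(sp)) <= din;
                sp <= sp + 1;
            elsif pop = '1' then
                sp <= sp - 1;
            end if;
        end if;
    end process;

    -- Top of stack is the most recently pushed entry (undefined until the first push).
    tos <= mem(to_integer(sp - 1));
end architecture rtl;
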
Article: 108381

Are there any approaches or constraints available to fix hold-time violations in the Xilinx ISE tool aimed at Virtex-2 devices? The P&R tool is supposed to fix these automatically, but if the timing results from P&R show hold violations, what is the recommended approach to eliminating them? In the ASIC world one would insert buffers in the failing path and do an incremental P&R compile - does this apply to Xilinx FPGAs as well?

Article: 108382

Hi,
Is there anyone here who is familiar with signal integrity concepts? I'm running a trace (TDI for configuration) vertically under a high-speed (66 MHz) signal that runs horizontally. Will this signal (TDI) pose a signal integrity problem?

PS: I also have other FPGA configuration signals running horizontally under a vertical high-speed trace.

Thanks.

Article: 108383

On 2006-09-01, Spehro Pefhany <speffSNIP@interlogDOTyou.knowwhat> wrote:
> On Fri, 1 Sep 2006 20:36:17 +0100, the renowned John Woodgate
> <jmw@jmwa.demon.co.uk> wrote:
>
>> In message <hsogf2tfqmq5dvchfq7qnlj6m6kcmuvs3m@4ax.com>, dated Fri, 1
>> Sep 2006, Spehro Pefhany <speffSNIP@interlogDOTyou.knowwhat> writes
>>
>>> You had him executed by the Spanish Inquisition, didn't you, Mr. WOOD
>>> of GATE?
>>
>> Not me, it was one of the six other 'John Woodgate's in UK. In any case,
>> the result was unexpected.
>
> That the police actually *did* something? Yes, that would be
> unexpected.

"Nobody expects the Spanish Inquisition" (google for that phrase if it means nothing to you) :)

Bye.
Jasen

Article: 108384

Hi,
I am experiencing configuration problems with XC4010E parts. I configure them in slave serial mode: M0, M1, M2 = Z. The mode pins (M0-2) do not pull up to VCC as they should via their internal pull-ups, so the parts enter a configuration mode incompatible with my design. If I add external pull-ups, those pins pull up correctly and the configuration process seems good. Even so, in this case the operation of the devices is wrong or unpredictable.

My design does work correctly with 100s of parts, with or without external pull-ups. I just encountered this behavior with all parts of two shipments. Did anyone see similar behavior? Any help or information welcome.

JAG

Article: 108385

fpga_toys@yahoo.com wrote:
> jjlindula@hotmail.com wrote:
> > Where I work, the manager brings you into their office, starts a series
> > of short questions concerning your family and other things not relating
> > to your job and then finally gives you a pat on the back and says, good
> > job. Not much is really discussed and therefore not really useful.
>
> Stresses outside of work affect employee productivity big time.

I currently work offshore, on a survey ship. Parts of the crew work two weeks on and two weeks off, other parts of the crew work four weeks on and four weeks off. In this sort of situation, everything is exaggerated by a factor of 50 or more.

Start with the food and housing. If either of these is poor, the performance of the whole ship is affected. People who complain about the food spend energy on trivialities that drain their work capacity. People who don't sleep well get tired and worn out very quickly.

Next, the contact with friends and family. A 10 minute phone call every couple of days may be OK to stay in touch, but not to sort out domestic problems. What was that $1000 bill that dropped in the mail two days ago really all about? Why was the 5-year-old sick in kindergarten today? What are my chances with that girl I met at the party last weekend? These are more or less trivial questions in everyday life; they can become enormous at the best of times when one is prevented from following up quickly and personally.

Then there is the weather. When at sea, you have to work on the grace of mother nature. If the weather is too bad to do any work, there is nothing you can do about it. Maybe you even have to seek more sheltered waters. Different people have different tolerance to sea sickness. You are expected to do your job as long as working conditions are acceptable as far as the VESSEL is concerned. If your tolerance is lower, then you have a problem.

So you see, there may be lots of relevant "non-work-related factors" to consider. True, at sea the conditions may be more extreme, but the general principle and line of thought applies everywhere.

Rune

Article: 108386

yy wrote:
> Hi,
> Is there anyone here who is familiar with signal integrity concepts?
> I'm running a trace (TDI for configuration) under a high-speed
> signal (horizontal, 66 MHz) vertically. Will this signal (TDI) impose a
> signal integrity problem?
>
> PS: I have other fpga configuration signals running horizontally under
> a vertical high speed trace.

If the two traces run parallel for any distance on adjacent layers, you can expect to see two forms of coupling, inductive and capacitive. The capacitive coupling effects will be much stronger, so for adjacent layers you can ignore inductive coupling. However, this is a very easy problem to prevent: just don't run adjacent layers in the same direction! Run traces on layer 1 horizontally and on layer 2 vertically.

Then you only have to worry about traces running parallel on the same layer, which is a whole different problem and is much easier to deal with. Traces that run parallel on the same layer have very little capacitive coupling. The inductive coupling is such that if you maintain close spacing to a ground or power plane, you will not see significant effects.

So crosstalk is easily dealt with by applying two simple rules: 1) run signals on adjacent layers in orthogonal directions, and 2) maintain a very close spacing between signal layers and ground/power planes (5 mil is good). Adding a ground trace between the aggressor and victim traces has no effect beyond that of the extra spacing between the two. If you have a concern, just increase the spacing between the signals. Traces that merely cross do not couple enough to cause a problem in most circuits.

"zohair" <szohair@gmail.com> wrote in message news:ee9e9f6.-1@webx.sUN8CHnE... > Are there any approaches or constraints available to fix hold-time > violations in the Xilinx ISE tool aimed at Virtex-2 devices? The P&R tool > is supposed to automatically fix these, but if the timing results from P&R > show hold violations, what is the recommended approach to eliminating > them? In the ASIC world one would insert buffers in the failing path and > do an incremental P&R compile - does this apply for Xilinx FPGAs as well? Hold violations in the FPGA world generally are because the clock signal is a gated clock or generated internally in some fashion that can not be distributed with little net delay. Generally speaking this is not good design practice (at least no inside an FPGA) so the approaches are to - Change constraints on the clock so that it is distributed with low delay (i.e. use some global clock route network). - Change the source code to generate the clock appropriately (i.e. in a way that can be distributed via low delay). KJArticle: 108388
Article: 108388

<james7uw@yahoo.ca> wrote in message news:1157845013.796884.131330@b28g2000cwb.googlegroups.com...
> The reason I'm doing this is implied in my third paragraph:
> "Now, at least all the red is gone from Post-Map simulation
> and some of the bytes are right in my first section of output."
> In other words I'm doing this because it wasn't working,

And again, I'll point out that going about the task of 'getting it working' by trying to disable synthesis optimizations is the wrong approach and will likely be wasted effort.

> according to the simulator, and
> this improved matters significantly but not completely. The
> "red" I referred to was used by my simulator, ModelSim III XE 6.1e
> starter edition, to indicate unknown output values, I think, since
> X's for unknowns appeared post-map. They no longer appeared
> after I did the above, and then some but not all of my output was
> actually correct.

But that means nothing until you understand why the optimized result was 'incorrect'. Remember that optimization does not involve changing the overall function, just the implementation of that function. I have no doubt that if you were to somehow disable every possible optimization it might be possible to have it emulate the code that you originally wrote... but I'll contend that it still won't work for you in a real device either.

> In my first sentence I wrote "I'm trying to get post-map simulating
> correctly now that I have post-translate simulating correctly."
> In other words I'm trying to get the post-map design correct as
> shown by the simulator, because it isn't.

While it's possible (but not terribly probable) that there is a bug in the synthesis tool, the more likely explanation is in the source code that you wrote. Synthesis to an actual FPGA/CPLD/ASIC produces an output model that is strictly std_logic/std_ulogic based... there are no 'enumerated types', 'integers', etc. Those output models also model expected propagation delays that will exist in the actual device. That being the case, here are the things to look for and how to go about looking for them.

- Peruse the synthesis report for warnings. If it runs across code that is valid but is not well synthesized, there will usually be a warning (the classic example being the latch, signal 'initialization' values being another one). Comb through those warnings and fix them.
- Peruse the timing report for timing conditions. Timing analysis produces five basic numbers: setup time (for each input relative to the clock that samples it), hold time (for each input relative to the clock that samples it), clock to output delay, propagation delay (for pure combinatorial input to output paths) and clock frequency. Now go back to the code for your testbench and make sure that you are
  - Not violating setup or hold time.
  - Not violating clock frequency.
  - Not blaming the post route model when you look at outputs going to 'X' at a time that is still within the clock to output delay or propagation delay.
- Peruse your source code for ANY usage of a data type other than std_logic or std_ulogic. Enumerated types and integers are not illegal and can easily be synthesized, but they are susceptible to misuse. The misuse comes about because in the simulation environment signals/variables of those types will get 'magically' initialized... there is typically no such magic in a real device. You can write code for a counter using type 'natural' that will simulate just fine, but when that 'natural' is translated into 'std_logic', as it must be to be synthesized, the output model will not 'work' and will sit there as an unknown value because the original code had nothing to reset it to a known value.

Those are the tools you need to debug your problem. Disabling optimizations is not in that list and will only lead you down a path that will result in your final design not working anyway.

KJ

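To illustrate that last point, a hedged before/after sketch (the names are invented): the first fragment relies on the simulator initialising the 'natural', so its post-map model just sits at 'X'; the second resets the counter explicitly and behaves the same way in behavioural simulation, in the post-map model and in the real device.

-- Relies on simulator initialisation; the post-map model never leaves 'X':
--     signal count : natural range 0 to 255;
--     ...
--     if rising_edge(clk) then
--         count <= count + 1;
--     end if;

-- Explicit reset; safe in simulation and in hardware:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity safe_counter is
    port (
        clk   : in  std_logic;
        rst   : in  std_logic;
        count : out std_logic_vector(7 downto 0)
    );
end entity safe_counter;

architecture rtl of safe_counter is
    signal cnt : unsigned(7 downto 0);
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if rst = '1' then
                cnt <= (others => '0');   -- known starting value in silicon, not just in simulation
            else
                cnt <= cnt + 1;
            end if;
        end if;
    end process;

    count <= std_logic_vector(cnt);
end architecture rtl;
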
"Daniel S." <digitalmastrmind_no_spam@hotmail.com> wrote in message news:sFKMg.63210$g75.868637@weber.videotron.net... > Jonathan Bromley wrote: >> On Thu, 07 Sep 2006 03:19:32 GMT, "KJ" <kkjennings@sbcglobal.net> wrote: >> >>> I've long since found that more extensive simulation is many times much >>> more productive in weeding out the bugs than trying to debug on actual >>> hardware. >> [...] >> >> Many, many thanks for a delightfully clear exposition of a vital idea. >> >> Just occasionally, the orders-of-magnitude faster throughput of tests >> that you win from a hardware prototype outweighs the pathetically bad >> observability and controllability of the physical hardware as compared >> with a simulation. Only occasionally, though. > <snip> > Simulation is good for finding functionnal flaws and debugging. Hardware > validation is best used for real-world performance evaluation. That's my point...finding flaws and debugging....much quicker in simulation than on hardware. > > Throughput is one good argument for hardware testing. For HD video, it is > pretty much the only sane option for both throughput and general > practicality: testing against a large variety of input sources with > various parameters would tie up simulation host/licenses for days and > gobble up TBs for storing simulation data. Not really a good argument. What you should be focusing on with the hardware testing is finding test cases to see whether they 'work' or not (i.e. the various parameters for HD video in your example). If you've found all the functional bugs from simulation you'll find no failures during hardware testing either. That of course is a very big 'if'. What generally happens is that you do run across some test case(s) that fails on the hardware at which point you either... - Slap yourself on the head because it is a simple thing that you just forgot about. Fix and retest. - Characterize what it is about this test case that is different from the test cases that have already been simulated and add on to the simulation testbench to cover that case so you can then use the simulation to debug and then fix the problem. In any case, you're using the hardware not so much for debugging as it is for uncovering conditions that you didn't adequately test (and are now failing). My contention is that for the typical, experienced designer you'll have fewer of those failing cases in the first place when the simulation is done up front. > > With hardware validation as a backup, simulations can be more focused on > small, highly feature-specific tests while hardware runs help identify > missing test cases and problematic areas based on real-world results. Agree completely. Hardware is used to uncover the test cases that you thought you didn't need when you were doing the design and simulation. KJArticle: 108390
Article: 108390

KJ wrote:
> <james7uw@yahoo.ca> wrote in message
> news:1157845013.796884.131330@b28g2000cwb.googlegroups.com...
> > The reason I'm doing this is implied in my third paragraph:
> > "Now, at least all the red is gone from Post-Map simulation
> > and some of the bytes are right in my first section of output."
> > In other words I'm doing this because it wasn't working,
>
> And again, I'll point out that going about the task of 'getting it
> working' by trying to disable synthesis optimizations is the wrong approach
> and will likely be wasted effort.
> <snip>
> Those are the tools you need to debug your problem. Disabling optimizations
> is not in that list and will only lead you down a path that will result in
> your final design not working anyway.
>
> KJ

Hi,
1. I agree fully with KJ, who is an experienced author in this group.
2. I have never done any post-map simulation with any of the 6-8 projects I have finished individually in Xilinx FPGAs, and all of them went to market successfully.
3. While doing simulation, just check whether the logic design is correct; you don't have to check timing.
4. Let the Xilinx compiler determine whether the project meets its timing:
   a. setup timing;
   b. hold timing;
   c. running frequency.
If Xilinx ISE reports no timing violation in the above 3 categories, put the design in a chip, then test the board to see if there is any logic design error.

Never spend time doing post-map simulation;
Never spend time using DOS command lines;
Never spend time turning off Xilinx's optimization.

Weng

Article: 108391

KJ wrote:
> "zohair" <szohair@gmail.com> wrote in message
> news:ee9e9f6.-1@webx.sUN8CHnE...
> > Are there any approaches or constraints available to fix hold-time
> > violations in the Xilinx ISE tool aimed at Virtex-2 devices? The P&R tool
> > is supposed to automatically fix these, but if the timing results from P&R
> > show hold violations, what is the recommended approach to eliminating
> > them? In the ASIC world one would insert buffers in the failing path and
> > do an incremental P&R compile - does this apply for Xilinx FPGAs as well?
>
> Hold violations in the FPGA world generally arise because the clock signal is
> a gated clock, or is generated internally in some fashion that cannot be
> distributed with little net delay. Generally speaking this is not good
> design practice (at least not inside an FPGA), so the approaches are to:
> - Change constraints on the clock so that it is distributed with low delay
> (i.e. use some global clock route network).
> - Change the source code to generate the clock appropriately (i.e. in a way
> that can be distributed via low delay).
>
> KJ

Hi,
I had an experience with an Altera chip a few years ago. A hold violation means the signal disappears too soon; that means, in my opinion, the related equation is too simple to hold its value long enough. I added an artificial signal:

One <= '1';

And I included the One signal in the equation where the hold time violation happened. Because the extra signal One was added to the problematic equation, the hold time violation was eliminated. You may follow suit in the same way. I am sure you know where the equation is.

Weng

Article: 108392

zohair wrote:
> Are there any approaches or constraints available to fix hold-time violations in the Xilinx ISE tool aimed at Virtex-2 devices? The P&R tool is supposed to automatically fix these, but if the timing results from P&R show hold violations, what is the recommended approach to eliminating them? In the ASIC world one would insert buffers in the failing path and do an incremental P&R compile - does this apply for Xilinx FPGAs as well?

Answer from Peter Alfke:

In a synchronous design, you clock all flip-flops with a common clock. The Q output of one flip-flop drives (through interconnect and perhaps also other logic) the D input of the "downstream" flip-flop. Although all flip-flops are clocked together, the clock-to-Q plus other delays assure that the "old" data is held at the downstream flip-flop input until well after the clock edge. Perfect operation!

If, however, the clock arrives at the downstream flip-flop very late, the old data may already have disappeared, and the "new" data will be clocked in, which is one form of a race condition. You never have this problem when you use the global clock distribution, for its skew is less than the propagation delay from one flip-flop to the other. But if you use local clock routing, or - heaven forbid - use clock gating or other unsavory methods, then you can (or will) create hold time problems.

The solution is: don't do that. Use an un-gated global clock, together with selective Clock Enable. Inserting extra delays in the data path is a dangerous Band-Aid method, only to be used in emergencies. Try to understand your problem first, before fixing it.

Peter Alfke, Xilinx

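A minimal sketch of the clock-enable style recommended above, with invented names; the gated-clock style to avoid is shown only as a comment:

library ieee;
use ieee.std_logic_1164.all;

entity ce_example is
    port (
        clk : in  std_logic;   -- un-gated global clock
        en  : in  std_logic;   -- clock enable, replacing the gate
        d   : in  std_logic;
        q   : out std_logic
    );
end entity ce_example;

architecture rtl of ce_example is
begin
    -- Gated-clock style to avoid:
    --     gated_clk <= clk and en;
    --     ... if rising_edge(gated_clk) then q <= d; end if;

    process (clk)
    begin
        if rising_edge(clk) then
            if en = '1' then
                q <= d;   -- the flip-flop updates only when enabled, but always sits on the global clock
            end if;
        end if;
    end process;
end architecture rtl;
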
Article: 108393

Jim Granville wrote:
> John Adair wrote:
> > Some FPGAs like the Xilinx Virtex-4 (FX - PowerPC) come with a
> > hard (always there) processor within the programmable fabric. There are
> > also soft core microprocessors that use the general FPGA fabric for
> > their implementation, like MicroBlaze.
> <snip>
> Another weak area, wrt microcontrollers, is the lack of on-chip FLASH code
> memory. Some small FPGA cores/tasks can run out of BRAMs, and those get
> quite close to being equivalent to the smallest microcontroller CORES.
>
> Then there are common uC peripherals (besides ADCs) like low power
> oscillators, calibrated RC oscillators, WDOGs, brown-out detectors,
> DACs, and 5V compatible pins... All this means you could find an FPGA
> used alongside a microcontroller, with each doing what it is best at.
>
> -jg

FPGAs do have some limitations, and are not appropriate in every case that requires a microcontroller, but they also have some important strengths for embedded processing applications. For example, if you need to interface with a device that has non-standard I/O, you can either do it in software, inefficiently, or write an interface that allows your software to interact with it in a standard way. IOW - you can push some of the difficult problems into hardware, making the software simpler.

TFT LCDs, for example, can be tricky to deal with. In the SoC design that I am working on, though, the interface logic allows software to simply write to an address in memory. No waiting, no funky I/O routines - just a single STA <address> in a simple memcpy routine. That is why FPGA-based SoCs can be very powerful.

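A minimal sketch, with invented bus, address and strobe names, of the kind of memory-mapped write port that such interface logic boils down to, so that a single CPU store updates the peripheral:

library ieee;
use ieee.std_logic_1164.all;

entity pixel_port is
    port (
        clk       : in  std_logic;
        bus_addr  : in  std_logic_vector(15 downto 0);
        bus_wdata : in  std_logic_vector(7 downto 0);
        bus_wr    : in  std_logic;
        lcd_data  : out std_logic_vector(7 downto 0);
        lcd_wr    : out std_logic
    );
end entity pixel_port;

architecture rtl of pixel_port is
    constant PIXEL_ADDR : std_logic_vector(15 downto 0) := x"8000";
begin
    process (clk)
    begin
        if rising_edge(clk) then
            lcd_wr <= '0';
            if bus_wr = '1' and bus_addr = PIXEL_ADDR then
                lcd_data <= bus_wdata;   -- latch the value the CPU just stored
                lcd_wr   <= '1';         -- pulse the LCD write strobe for one clock
            end if;
        end if;
    end process;
end architecture rtl;
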
Article: 108394

I have 20 Altera EPM7064SLC44s that apparently have had their JTAG pins disabled, and can therefore only be erased by an 'out of system' Altera programmer. I have no such beast, but perhaps someone else here does and could help? I'd hate to bin these 7064s.

John Kortink

--
Email    : kortink@inter.nl.net
Homepage : http://www.inter.nl.net/users/J.Kortink

Article: 108395

Hello,

I am using the Xilinx Virtex 4 FX 12 Evaluation Kit and want to get ethernet working (it will have to do some IP networking stuff). The board is equipped with an MII interface, so using hard_temac and plb_temac should be enough (or am I wrong here?). I programmed the FPGA using these IP cores (and those which the BSB preselected), but the ethernet switch I connected to the board does not recognize anything. Am I making a mistake? Or do I first need to get some software running on the integrated PPC which somehow initializes the ethernet interface?

# ##############################################################################
# Created by Base System Builder Wizard for Xilinx EDK 8.1.02 Build EDK_I.20.4
# Mon Sep 4 13:52:39 2006
# Target Board: Avnet Virtex-4 FX12 Evaluation Board Rev 1.0
# Family: virtex4
# Device: XC4VFX12
# Package: FF668
# Speed Grade: -10
# Processor: PPC 405
# Processor clock frequency: 100.000000 MHz
# Bus clock frequency: 100.000000 MHz
# Debug interface: FPGA JTAG
# On Chip Memory : 16 KB
# Total Off Chip Memory : 36 MB
#   - FLASH_2Mx16 = 4 MB
#   - DDR_SDRAM_1 = 32 MB
# ##############################################################################
PARAMETER VERSION = 2.1.0

[...]

PORT fpga_0_Ethernet_MAC_PHY_Mii_mdc = fpga_0_Ethernet_MAC_PHY_Mii_mdc, DIR = O
# managment data io
PORT fpga_0_Ethernet_MAC_PHY_Mii_mdio = fpga_0_Ethernet_MAC_PHY_Mii_mdio, DIR = IO
# transmit data
PORT fpga_0_Ethernet_MAC_PHY_Mii_txd = fpga_0_Ethernet_MAC_PHY_Mii_txd, DIR = O, VEC = [3:0]
# transmit enable
PORT fpga_0_Ethernet_MAC_PHY_Mii_txen = fpga_0_Ethernet_MAC_PHY_Mii_txen, DIR = O
# transmit error
PORT fpga_0_Ethernet_MAC_PHY_Mii_txerr = fpga_0_Ethernet_MAC_PHY_Mii_txerr, DIR = O
# transmit clock
PORT fpga_0_Ethernet_MAC_PHY_Mii_tx_clk = fpga_0_Ethernet_MAC_PHY_Mii_tx_clk, DIR = I
# carrier sense
PORT fpga_0_Ethernet_MAC_PHY_Mii_crs = fpga_0_Ethernet_MAC_PHY_Mii_crs, DIR = I
# collision detect
PORT fpga_0_Ethernet_MAC_PHY_Mii_col = fpga_0_Ethernet_MAC_PHY_Mii_col, DIR = I
# receive data
PORT fpga_0_Ethernet_MAC_PHY_Mii_rxd = fpga_0_Ethernet_MAC_PHY_Mii_rxd, DIR = I, VEC = [3:0]
# receive data valid
PORT fpga_0_Ethernet_MAC_PHY_Mii_rxdv = fpga_0_Ethernet_MAC_PHY_Mii_rxdv, DIR = I
# receive error
PORT fpga_0_Ethernet_MAC_PHY_Mii_rxerr = fpga_0_Ethernet_MAC_PHY_Mii_rxerr, DIR = I
# receive clock
PORT fpga_0_Ethernet_MAC_PHY_Mii_rxclk = fpga_0_Ethernet_MAC_PHY_Mii_rxclk, DIR = I
PORT LVDS_N = lvds_n, DIR = IO, VEC = [0:29]
PORT LVDS_P = lvds_p, DIR = IO, VEC = [0:29]

[...]

BEGIN plb_temac
 PARAMETER INSTANCE = plb_temac_0
 PARAMETER HW_VER = 3.00.a
 PARAMETER C_BASEADDR = 0x81200000
 PARAMETER C_HIGHADDR = 0x8120ffff
 BUS_INTERFACE MSPLB = plb
 BUS_INTERFACE V4EMACSRC = plb_temac_0_V4EMACSRC
END

BEGIN hard_temac
 PARAMETER INSTANCE = hard_temac_0
 PARAMETER HW_VER = 3.00.a
 PARAMETER C_PHY_TYPE = 0
 BUS_INTERFACE V4EMACDST0 = plb_temac_0_V4EMACSRC
 PORT MII_TXD_0 = fpga_0_Ethernet_MAC_PHY_Mii_txd
 PORT MII_TX_EN_0 = fpga_0_Ethernet_MAC_PHY_Mii_txen
 PORT MII_TX_ER_0 = fpga_0_Ethernet_MAC_PHY_Mii_txerr
 PORT MII_RXD_0 = fpga_0_Ethernet_MAC_PHY_Mii_rxd
 PORT MII_RX_DV_0 = fpga_0_Ethernet_MAC_PHY_Mii_rxdv
 PORT MII_RX_ER_0 = fpga_0_Ethernet_MAC_PHY_Mii_rxerr
 PORT MII_RX_CLK_0 = fpga_0_Ethernet_MAC_PHY_Mii_rxclk
 PORT MII_TX_CLK_0 = fpga_0_Ethernet_MAC_PHY_Mii_tx_clk
 PORT MDC_1 = fpga_0_Ethernet_MAC_PHY_Mii_mdc
 PORT MDIO_1 = fpga_0_Ethernet_MAC_PHY_Mii_mdio
END

--
(_G-N-U_) Benedikt Wildenhain <benedikt.wildenhain@g-n-u.de>
  o o     Netzwerkadministrator und Software-Entwickler
          G-N-U GmbH, EDV-Dienstleistungen, http://www.g-n-u.de

Article: 108396

On 2006-09-10, Benedikt Wildenhain <benedikt@benedikt-wildenhain.de> wrote:
>
> I am using the XilinX Virtex 4 FX 12 Evaluation Kit and want to get

Are you talking about the ML403?

> but the ethernet
> switch I connected to the board does not recognize anything.

There will be an external PHY on your board. It's likely that you could hook your switch up to this without anything in the FPGA and it would negotiate a link speed and blink the RX lights. If that's not happening, solve that problem before you worry about the FPGA.

--
Ben Jackson AD7GD
<ben@ben.com>
http://www.ben.com/

Article: 108397

What is the difference between Functional and Post-Synthesis Simulation? Why should I do both simulations?

Thanks,
Peppe

Article: 108398

Manfred Balik wrote:
> Hello,
> I am using an Altera MAX3000A CPLD to make a level conversion from 5V TTL to
> 3.3V TTL (and further jobs...).
> My problem is - I can't connect two bidirectional ports directly to get a
> bidirectional connection.
> I'm using Altera Quartus II; a direct connection produces an error message,
> and a simple VHDL block doesn't solve the problem either.
> My VHDL code is like:
>
> ENTITY ...
>   port1 : INOUT STD_LOGIC;
>   port2 : INOUT STD_LOGIC;
> ARCHITECTURE ...
>   port1 <= port2;
>   port2 <= port1;
>
> How can I make a bidirectional connection between two bidirectional ports???
> thanks, Manfred

Take a look at Analog Devices' line of level translators -
http://www.analog.com/en/subCat/0,2879,767%255F828%255F0%255F%255F0%255F,00.html
The ADG3301/04/08 will connect 1/4/8-bit bidirectional buses together with 3-state control.

Marc

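On the VHDL side, a minimal sketch (entity and port names invented) of the usual workaround: two INOUT ports cannot simply be assigned to each other, because each side would then drive continuously; a direction signal has to decide which side drives while the other is tri-stated.

library ieee;
use ieee.std_logic_1164.all;

entity bidir_bridge is
    port (
        dir   : in    std_logic;   -- '1': port1 drives port2, '0': port2 drives port1
        port1 : inout std_logic;
        port2 : inout std_logic
    );
end entity bidir_bridge;

architecture rtl of bidir_bridge is
begin
    port2 <= port1 when dir = '1' else 'Z';
    port1 <= port2 when dir = '0' else 'Z';
end architecture rtl;

The catch, as the question implies, is that something has to supply that direction signal; if neither side provides one, an external level translator with automatic direction sensing (as Marc suggests) is the simpler route.
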
Article: 108399

Hello Ben,

On Sun, Sep 10, 2006 at 03:42:40PM -0500, Ben Jackson wrote:
> On 2006-09-10, Benedikt Wildenhain <benedikt@benedikt-wildenhain.de> wrote:
> >
> > I am using the XilinX Virtex 4 FX 12 Evaluation Kit and want to get
> Are you talking about the ML403?

No, I am using the board sold by Avnet, see http://shorl.com/dofrygrutogysy (main advantage: it has LVDS pairs which I will need to make accessible over TCP/IP).

> There will be an external PHY on your board. It's likely that you
> could hook your switch up to this without anything in the FPGA and
> it would negotiate a link speed and blink the RX lights. If that's
> not happening, solve that problem before you worry about the FPGA.

This doesn't seem to be the case: when I use an example project with opb_ethernet, which directly uses the PHY, the switch recognizes that there is a connection. (As I only have an evaluation IP core of opb_ethernet, I want to avoid using that one.)

--
GPG-Key 1024D/E32C4F4B | www.gnupg.org | http://enigmail.mozdev.org
Fingerprint = 9C03 86B5 CA59 F7A3 D976 AD2C 02D6 ED21 E32C 4F4B
Mit freundlichen Gruessen | Kun afablaj salutoj (www.esperanto.org)
May the tux be with you.
:wq 73