Yes, aiming for fmax is an exercise in extreme pipelining and is unlikely to do any good for one's sanity but as impractical as it may be, it is possible to do it for real designs... it just gets costly (consume FFs for routing and sacrifice LUTs), ugly (lots of code that does nothing except move data along) and inconvenient (more tricky/complex control logic) pretty quickly. Paul wrote: > As stated here.... for ANY fpga, the specified Fmax is a theoretical > max. Odds are you will never hit that if you are doing anything > vaguely complex. That assumes your logic is pipelined to the maximum > capabilities of the part - no more than one LUT of combinatorial logic > before a DFF - so it makes maximum use of the CLB layout. It also > assumes that routing delays are near negligible - i.e. assumes your > design can be routed such that you flow from one CLB directly to the > neighboring CLB. As soon as you go through more than one LUT of > combinatorial logic before a flop or pack your part tight enough that > your routes aren't all COMPLETELY ideal.... that is to say, as soon > as you put a real design in the part, that Fmax goes down depending > upon the details of your design.Article: 117351
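To make the trade-off described above concrete, here is a minimal VHDL sketch (not from the thread; the entity and signal names are invented) of the most basic version of the trick: splitting a wide adder into two registered halves so that only a short carry chain sits between flip-flops.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical sketch (not from the thread): a 32-bit add split into two
-- registered 16-bit halves so that only a short carry chain sits between
-- flip-flops.  Costs: one extra cycle of latency and extra registers,
-- which is exactly the "consume FFs" trade-off described above.
entity pipelined_add32 is
  port (
    clk  : in  std_logic;
    a, b : in  unsigned(31 downto 0);
    sum  : out unsigned(31 downto 0)
  );
end entity pipelined_add32;

architecture rtl of pipelined_add32 is
  signal lo_sum         : unsigned(16 downto 0);  -- low half plus carry-out
  signal a_hi_d, b_hi_d : unsigned(15 downto 0);  -- high halves, delayed
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- stage 1: add the low halves, keep the carry in bit 16
      lo_sum <= ('0' & a(15 downto 0)) + ('0' & b(15 downto 0));
      a_hi_d <= a(31 downto 16);
      b_hi_d <= b(31 downto 16);
      -- stage 2: add the high halves plus the registered carry
      sum(15 downto 0) <= lo_sum(15 downto 0);
      if lo_sum(16) = '1' then
        sum(31 downto 16) <= a_hi_d + b_hi_d + 1;
      else
        sum(31 downto 16) <= a_hi_d + b_hi_d;
      end if;
    end if;
  end process;
end architecture rtl;

Taken to the extreme for every arithmetic and control path, this is what pushes a design toward the datasheet fmax, and also what produces the "costly, ugly, inconvenient" code the post warns about.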
Ace wrote: > Thank you to both Andreas and Paul. After reading your feedback, can I > make this conclusion "Spartan FPGA has its own clock which can fit > most designs". Huh... I think you got it backwards. FPGAs do not have "their own clock", you have to tell the synthesis tools what clock period you intend to operate the FPGA at unless you want the synthesis to run and optimize the design until timing improvements stall. If you place a 10ns period constraint on an input clock, the synthesis tools will tune the primitives' placement and routing until all components on each constrained clock net and derived clocks meet the specified and derived timing constraints. The maximum operating frequencies for specific functional blocks within the FPGA and common constructs are only general indications of the highest achievable clock rates in a carefully pipelined design. Just like a chain, your overall highest operating frequency is limited by the slowest functional block or combinational construct in each clock domain. The STA ("Static Timing Analysis") will tell you what the highest operating frequencies (minimum clock periods, actually) for each of your design's clock domains are and which paths have failed to meet timing specifications.Article: 117352
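For readers who have not written one before, the 10 ns period constraint mentioned above is normally expressed in the ISE UCF file roughly as follows; the net and timegroup names are assumptions for the example, not from the thread.

# Hypothetical UCF fragment: constrain the clock net "clk" to a 10 ns
# period (100 MHz).  Static timing analysis then reports every path in
# that clock domain against the 10 ns budget.
NET "clk" TNM_NET = "clk_grp";
TIMESPEC "TS_clk" = PERIOD "clk_grp" 10 ns HIGH 50%;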
> FPGAs do not have "their own clock", you have to tell the synthesis tools > what clock period you intend to operate the FPGA at unless you want the > synthesis to run and optimize the design until timing improvements stall. > If you place a 10ns period constraint on an input clock, the synthesis > tools will tune the primitives' placement and routing until all components > on each constrained clock net and derived clocks meet the specified and > derived timing constraints. Thanks a lot Daniel! :D I must have made myself sound so naive (well I am...) Cheers!Article: 117353
On Mar 25, 8:39 pm, "John McCaskill" <junkm...@fastertechnology.com> wrote: > On Mar 25, 12:32 am, "Allen" <lphp...@gmail.com> wrote: > > > > > > > On Mar 23, 8:33 pm, Zara <me_z...@dea.spamcon.org> wrote: > > > > On 23 Mar 2007 05:13:46 -0700, "Allen" <lphp...@gmail.com> wrote: > > > > >hi all, > > > > >first, i am sorry for my poor English. > > > > >i use EDK 7.1i and ISE 7.1i. > > > > >imported custom peripheral with PLB Master Interface ( not from IFIP ) > > > >into my .xps project after overcame several problems. > > > > >In the step " generate netlist " there has no error or warning. > > > > >but in the step "Generate Bitstream",i got a error message "ERROR!! > > > >NgdBuild:455 plb_M_ABUS<62> has multiple driver(s)": return code 2 > > > >abort. > > > > >already search this problem in xilix's answer database and tried > > > >modify the parameter of C_BaseAddr, but it is still stuck here. > > > > >does anyone meet this problem before? > > > > >thanks in advance. > > > > I always got that messaghe when I had some signal with two outputs > > > connected to it. That seems your case, in your plb address bus, master > > > interface. > > > > Best regards, > > > > Zara- Hide quoted text - > > > > - Show quoted text - > > > Thanks for your reply. > > > so it might mean something wrong during import of custom peripheral? > > > but in 64-bit PLB protocol, the address width is 32-bit. > > > i already use (C_PLB_AWIDTH-1) to replace with constant "31" in my > > port declaration. > > > do anything i could try to solve this problem? > > > thank you :-) > > The PLB address width is 32 bits. However, the way that all the bus > signals are connected to the PLB IP is that they are concatenated > together. So if you look at the MPD file for the PLB you will see that > it defines the bus width to be 32 bits times the number of masters: > > PORT M_ABus = M_ABus, DIR = I, VEC = [0: > (C_PLB_NUM_MASTERS*C_PLB_AWIDTH)-1] > > So the signal plb_M_ABUS<62> is bit 30 of the second master. Look to > see if your core has been assigned the second master slot on the PLB > bus. Assuming that is the case, find what two sources are driving bit > 30 of the address. > > Assuming that you left the name of your EDK project as system.xmp, > when you tell EDK to generate a netlist it will create a top level hdl > file named either system.vhd or system.v depending on your tool > settings. You can look at this file to see how EDK has connected the > cores to the PLB. > > It has been a while since I had to find a multi source signal, but I > think that XST will produce a warning about it in its report and tell > you what the multiple soures are. Look in the synthesis report file > for the appropriate core and see if it tells you what the source of > the problem is. > > Since you are creating your own interface design instead of using the > IPIF, are you using the bus functional models in your simulations? I > use these, and they make the job much easier. I think it was not > until EDK 8.1 that they were integrated into EDK itself, but I was > able to use the CoreConnect tool kit directly from IBM in some of our > early stuff. The bus monitors will tell you as soon as your core has > done something wrong, so you do not have to track the source of the > problem back from when the symptoms show up. > > Regards, > > John McCaskillwww.fastertechnology.com- Hide quoted text - > > - Show quoted text - Thanks for your reply. :-) Where i could see the second master is who( power pc or custom peripheral ... etc)? 
I don't know where to check this. I opened the system.vhd to see what connects to the plb_M_ABus. "PowerPC" has 2 ports and the custom peripheral has 1 port connected to the plb_M_ABus. Next step, I am going to look in the XST report file to see what drives plb_M_ABUS<62>. I didn't run the simulation of this platform. I have heard of the "bus functional model" before, but I don't understand how to use it. Could you give me some information about it? In addition to the error "NgdBuild 455", there are several warnings about "SFF Primitive"; I didn't find this in the Xilinx answer database... maybe this warning has something to do with the error. Thank you very much~Article: 117354
Is there anyone who has created a VHDL, Verilog or schematic "Watershed Transform"? It is an image transform and I am not sure if it is possible to implement it. Thanks, PabloArticle: 117355
Hi all I am trying to run uClinux on MicroBlaze. The Memory Test Demo is going all right. In XMD, I download the image.bin with the cmd "dow -data image.bin 0x24000000", and then "con 0x24000000". But I got the feedback below and no uClinux bootup in TTPro. ################# XMD% con 0x24000000 Processor started. Type "stop" to stop processor RUNNING> XMD% ERROR:MDT - MicroBlaze Pipeline Stalled executing Instruction at >> PC: 0x00000010 Try Resetting the Processor to Continue.. ################ What's the error? I hope you can help me. Thanks Regards Orlando SunArticle: 117356
Herbert Kleebauer wrote: > 1. I have tried to find an actual FPGA with a package which can > be soldered with non-professional equipment, something like > a PLCC84 where you can get cheap sockets which can be used on Does anybody know whether there is a company which sells Spartan3 chips (or any other FPGA type) already soldered to an adaptor board like: http://www.rsonline.de/cgi-bin/bv/rswww/searchBrowseAction.do?N=0&Ntk=I18NAll&Ntt=295-4331 Preferably with the GND and VCC pins already connected and capacitors on the adapter.Article: 117357
"Daniel S." <digitalmastrmind_no_spam@hotmail.com> writes: > Martin Thompson wrote: >> "Daniel S." <digitalmastrmind_no_spam@hotmail.com> writes: >> >>> Until my larger designs are sufficiently advanced to start producing >>> meaningful simulation results (I call this the approximation phase), I >>> am more interested in tracking resource utilization and static timing >>> evolution than correctness: achieving absolute correctness in the >>> first pass (the one most susceptible to typos and syntax errors) is >>> useless if the implementation fails to meet target timings or exceeds >>> the logic budget. It is during these passes that XST&all die on me the >>> most and Modelsim as a simulation tool is irrelevant - that's why it >>> took me so long to think of using it as a syntax checker. >> >> But if you make a trivial error in your design, won't the optimiser >> chuck lots of stuff out and make it look much better than it is? >> You'll never know until you simulate... > > When the optimizer chucks out lots of stuff, it usually spits out > truckloads of warnings about unconnected signals being trimmed, never > changing ("replaced by logic") or equivalent to some other net. Some > of these trimmed signals are expected (is there a simple and > inexpensive way to quietly drop multiplier LSBs when doing fixed-point > maths?) but the rest are good indicators of missing stuff. Yes, I get loads of warnings flagged about dropped bits due to my tendency to write code that is "easy to reuse 'cos the optimiser will drop the bits it needs to". That means important stuff can get lost in the morass! > > Since I do not wish to spend much time on control logic during the > approximation phase, I simply connect the N control signals to an > N-bits PRNG with a few IOBs blended in to prevent synthesis from > simplifying the dummy control logic and everything else > thereafter. Once the data/processing pipeline is in place and looks > good, I can start implementing and optimizing the real control logic > using simulations - I usually try to save as much of the control logic > as possible for last since relatively minor data path refinements can > have a major impact on the control logic and optimization > opportunities... I am lazy so I try to avoid moving targets whenever > possible. That must be where we differ then - most of my designs, its the control logic that takes most of the effort, especially handling error cases. The actual "processing" isn't usually my problem... Now, off to write code which will no doubt prove me wrong within an hour ;-) Cheers, Martin -- martin.j.thompson@trw.com TRW Conekt - Consultancy in Engineering, Knowledge and Technology http://www.conekt.net/electronics.htmlArticle: 117358
"Herbert Kleebauer" <klee@unibwm.de> wrote in message news:460B70A6.4096E0CA@unibwm.de... > Herbert Kleebauer wrote: > >> 1. I have tried to find an actual FPGA with a package which can >> be soldered with a non professional equipment, something like >> a PLCC84 where you can get cheap sockets which can be used on > > Does anybody know whether there is a company which sells Spartan3 > chips (or any other FPGA type) already soldered to an adaptor board > like: > > http://www.rsonline.de/cgi-bin/bv/rswww/searchBrowseAction.do?N=0&Ntk=I18NAll&Ntt=295-4331 > > Preferable with the GND an VCC pins already connected and capacitors > on the adapter. > Hi Herbert, Apart from the ones John mentioned? http://www.enterpoint.co.uk/component_replacements/craignell.html Also, this might help. http://www.fpga-faq.com/FPGA_Boards.shtml I wonder, given that you're in the computer science faculty, have you spoken to the guys in EE to ask for their help? Just a thought... Good luck, Syms.Article: 117359
On 29 Mar 2007 00:27:07 -0700, "Pablo" <pbantunez@gmail.com> wrote: >Is there anyone who has created a VHDL, Verilong or Schematics >"WaterShed Transform"?. It is an image transform and I am not sure if >it is possible to implement it. > This may not help, but I would take a look at the "Level Set Method" which seems to be related. - BrianArticle: 117360
On Mar 27, 10:11 am, "GaLaKtIkUs™" <taileb.me...@gmail.com> wrote: > MIG 1.7 is already cited in the new versions of DDR2 SDRAM related > application notes. > There are also a few answer records about it. > But there is no download link on Xilinx's site :(( > > Mehdi It's now available at the address http://www.xilinx.com/xlnx/xil_ans_display.jsp?getPagePath=24541 MehdiArticle: 117361
Hi Mehdi, Thanx a lot. Bye Helmut On 29 Mrz., 13:18, "GaLaKtIkUs™" <taileb.me...@gmail.com> wrote: > On Mar 27, 10:11 am, "GaLaKtIkUs™" <taileb.me...@gmail.com> wrote: > > > MIG 1.7 is already cited in the new versions of DDR2 SDRAM related > > application notes. > > There are also a few answer records about it. > > But there is no dowload link on Xilinx's site :(( > > > Mehdi > > Its now available at the address http://www.xilinx.com/xlnx/xil_ans_display.jsp?getPagePath=24541 > > MehdiArticle: 117362
Hi all I'm a student at the UAM (University of Madrid, Spain). My TEA (pre-thesis) is based on this board. I present it in 3 days, and I have broken the only board I have. I desperately need someone who can sell me one (or more) used/new fx12mini module boards. I think you can imagine how important this board is for me. So if you know someone who uses this board, or if you have one you can sell me, please let me know. Thank you very much for your help.Article: 117363
Hi everyone, I have an FPGA board with two Ethernet MAC interfaces. I want to connect both interfaces in a way that the transmitter of one EMAC IF is connected with the receiver of the other and vice versa. I have to have a seamless connection. Please let me know whether I would be able to achieve it without putting an EMAC core in the FPGA. Are there any issues with MII that I am supposed to deal with except clock synchronization, which can be dealt with by using block RAM or some other buffering? Looking for a timely reply MADNANArticle: 117364
On 29 Mrz., 00:10, "John_H" <newsgr...@johnhandwork.com> wrote: > <jid...@hotmail.com> wrote in message > > news:1175119186.275874.78300@e65g2000hsc.googlegroups.com... > > > > > Thanks for sharing the schematics! > > > Hmm...the TDO and voltage sense line are the exactly the same as > > orginally posted by Xilinix. For the other JTAG lines (TMS, TCK, TDI), > > two schmitt-triggers and a buffer were used to increase noise > > immunity. Thats a little bit too many components for my taste, but > > seems to be reliable. > > Seems to be reliable as in... you've already tried it and seen your troubles > go away? > > Your problem may not be on the Schmidt trigger side of your buffers but in > the interface between the PC and the programmer chips. > > Have you tried your existing programmer on a different PC to see if the > behavior is identical or vastly different? > > You could modify the Parallel-III style design with comparators and a > low-voltage DC-DC boost converter with fixed 5V output to power the > LPT-compatible circuit with a variable logic level I/O to get closer to > Parallel-IV performance with a Parallel-III interface. I have tried Parallel-III cable with 2 PC's, and with both of them this problem existed.Article: 117365
On 29 Mrz., 01:04, "comp.arch.fpga" <ksuli...@googlemail.com> wrote: > jid...@hotmail.com wrote: > > > Since this problem applies to more than one board, I assume the > > problem must lay on the programmer itself. My explanation is that > > maybe the buffer IC's get hot or something...I really don't know. > > I want to know if there is anyone who has had problems with this > > programmer cable. > > As I posted before, there are many issues with parallel cable 3, > especially > in conjunction with old versions of impact. (The old software versions > simultaneously turn off > the output enable and change the data output value resulting in a race > condition) > > The main issue is that the parallel port has TTL logic levels that > only have a guaranteed V_OH of > 2V. But a 5V powered 74HC125 has a threshold voltage of 2.5V which > might never be reached. > A 3.3V powered 74HC125 still has a threshold of 1.65V. In that range > the slope of a parallel port output > can be very slow, but the 74HC is quite fast. As a result noise on the > parallel port will amplify to > many clock edges to TCK. > > You can solder 2k resistors between each buffer's input and output pin. > (0402 SMT fits perfectly) > This creates a Schmitt trigger with a large hysteresis which helps a > lot. > > For building your own cable there are CMOS families that have > thresholds of 1/3 VCC instead of > 1/2 VCC. > A 50mV hysteresis is not enough, so you need the positive feedback > from output to input in all cases. > > Kolja Sulimma Thanks for your detailed reply Kolja, it sounds logical to me. But there is one thing still unexplained to me: although the Parallel-III programmer doesn't work at the beginning, once it starts to work after many tries, it continues to work for hours with no problems. Why??? It seems to me it has something to do with the temperature of the buffer IC's!Article: 117366
Thanks Gabor, for your advice. I came to know about the ONFI group. They are standardizing the flash interface. But Samsung and Toshiba are still not in. Any idea about its future? I thought it would be better if we could somehow convert other vendors' standards to a common standard. And if I choose ONFI, is there any risk involved? And one more request: can you suggest a site from which I can get the latest market analysis of flash products, with pie charts etc.? I searched but only got very old ones (I am still a baby at googling). regards SumeshArticle: 117367
Pipelining the four banks am I right? I will study this since this is the first time I handle SDRAM. Thank you for the pointer. "Gabor" <gabor@alacron.com> wrote in message news:1175116871.777082.190590@p15g2000hsd.googlegroups.com... > On Mar 28, 11:21 am, "Oli Charlesworth" <c...@olifilth.co.uk> wrote: >> On Mar 28, 3:36 pm, "news reader" <newsrea...@google.com> wrote: >> >> >> >> > In my interleaver design for FPGA, I am using an external SDRAM for >> > data >> > storage. >> >> > The clock cycles required to write a frame into the RAM and read a >> > frame >> > back to error correction unit ain't enough. >> >> > The interleaver has 40 rows, which contain 200 * 0, 1, ...39 pieces of >> > data. >> > And one row of the RAM contains 256 data. The write/read pointers are >> > increased by 200*i or decreased by 200*i (0<i<39) for each write/read >> > operation. >> >> > As a result, nearly everytime I write one data or read one data, I have >> > to >> > go >> > through a "open a new row, write or read 1 data, close the row" cycle. >> > To >> > open >> > a row and close it, the memory requires some 10 clock cycles. >> >> > How is it possible to design it in such a way that memory write is in >> > sequential >> > order? That is, when a new frame arrives, I write into the RAM column >> > by >> > until current row is filled, then open the next row. >> >> > I may have to read in a random access, but I can save a lot of clock >> > cycles >> > in >> > the memory write. >> >> > FYI, my resources is some RTL logic and an SDRAM. The design can be >> > made >> > with the FPGA's LUTs, but i don't own the resource. >> >> If you write out by hand the order in which data comes out of a depth- >> N interleaver, you should be able to spot the pattern (it's really not >> a very complicated pattern...). >> >> You can then just apply this pattern to the read pointer, letting you >> store your data in the original (sequential) order. In other words, >> the interleaving is done during the read operation. >> >> However, the read access pattern will be just as "random" as the write >> access pattern was in your original design. So unless your SDRAM's >> read cycle is shorter than its write cycle, this won't save you any >> time. >> >> -- >> Oli > > > I'm not sure how much it would help in your application unless your > interleave factor is constant, but using the 4 banks of the SDRAM is > generally a faster way to deal with high-speed data than attempting to > locate all data in a single row. SDRAM was designed such that the > row activate and precharge can be "buried" behind other operations > when multiple banks are used. So in essence if you can order your > storage such that each new data comes from a different bank in the > SDRAM you can cut your cycle time down significantly. Using the > minimum burst size (assuming you don't need more than one word > of data at a time) and read with autoprecharge (to avoid the > additional > command to "close" the row) you can get down to 2 cycles per read if > you rotate through all four banks. 
A control sequence for a regular > rotation through 4 banks might look like: > --- Startup section with some wasted cycles --- > Activate Row in Bank 0 > NOP > Activate Row in Bank 1 > NOP > Activate Row in Bank 2 > --- Continuous high-bandwidth section > Read Col in Bank 0 with autoprecharge > Activate Row in Bank 3 > Read Col in Bank 1 with autoprecharge > Activate Row in Bank 0 > Read Col in Bank 2 with autoprecharge > Activate Row in Bank 1 > Read Col in Bank 3 with autoprecharge > Activate Row in Bank 2 > Read Col in Bank 0 with autoprecharge > Activate Row in Bank 3 > Read Col in Bank 1 with autoprecharge > Activate Row in Bank 0 > Read Col in Bank 2 with autoprecharge > Activate Row in Bank 1 > Read Col in Bank 3 with autoprecharge > Activate Row in Bank 2 > Read Col in Bank 0 with autoprecharge > Activate Row in Bank 3 > Read Col in Bank 1 with autoprecharge > --- Closing section with some wasted cycles > NOP > Read Col in Bank 2 with autoprecharge > NOP > Read Col in Bank 3 with autoprecharge > > Note that the mid-section can be repeated ad-naseum allowing > unlimited access length at this bandwidth. Also note that > each access includes its own Row and Column so other than > the bank restriction the order can be truly random. > > HTH, > Gabor >Article: 117368
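One way to get the bank rotation shown above without hand-scheduling every access is to put the bank number in the low bits of the word address, so consecutive addresses naturally rotate through the banks. A sketch under an assumed geometry (4 banks, 13 row bits, 9 column bits; adjust to the actual device):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical address split, assuming a 4-bank SDRAM with 13 row and
-- 9 column address bits (2 + 13 + 9 = 24-bit word address).  With the
-- bank number in the two LSBs, consecutive word addresses rotate through
-- all four banks, so each activate/precharge can hide behind accesses to
-- the other banks as in the command sequence above.
entity sdram_addr_split is
  port (
    word_addr : in  unsigned(23 downto 0);
    bank_addr : out unsigned(1 downto 0);
    row_addr  : out unsigned(12 downto 0);
    col_addr  : out unsigned(8 downto 0)
  );
end entity sdram_addr_split;

architecture rtl of sdram_addr_split is
begin
  bank_addr <= word_addr(1 downto 0);     -- bank in the LSBs => rotation
  col_addr  <= word_addr(10 downto 2);    -- 9 column bits
  row_addr  <= word_addr(23 downto 11);   -- 13 row bits
end architecture rtl;

With this mapping, simply stepping through word addresses reproduces the command pattern above; whether it helps the interleaver depends on how its non-sequential access pattern falls across the banks.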
"Oli Charlesworth" <catch@olifilth.co.uk> wrote in message news:1175095276.957406.238840@r56g2000hsd.googlegroups.com... > On Mar 28, 3:36 pm, "news reader" <newsrea...@google.com> wrote: >> In my interleaver design for FPGA, I am using an external SDRAM for data >> storage. >> >> The clock cycles required to write a frame into the RAM and read a frame >> back to error correction unit ain't enough. >> >> The interleaver has 40 rows, which contain 200 * 0, 1, ...39 pieces of >> data. >> And one row of the RAM contains 256 data. The write/read pointers are >> increased by 200*i or decreased by 200*i (0<i<39) for each write/read >> operation. >> >> As a result, nearly everytime I write one data or read one data, I have >> to >> go >> through a "open a new row, write or read 1 data, close the row" cycle. To >> open >> a row and close it, the memory requires some 10 clock cycles. >> >> How is it possible to design it in such a way that memory write is in >> sequential >> order? That is, when a new frame arrives, I write into the RAM column by >> until current row is filled, then open the next row. >> >> I may have to read in a random access, but I can save a lot of clock >> cycles >> in >> the memory write. >> >> FYI, my resources is some RTL logic and an SDRAM. The design can be made >> with the FPGA's LUTs, but i don't own the resource. > > > If you write out by hand the order in which data comes out of a depth- > N interleaver, you should be able to spot the pattern (it's really not > a very complicated pattern...). > > You can then just apply this pattern to the read pointer, letting you > store your data in the original (sequential) order. In other words, > the interleaving is done during the read operation. > > However, the read access pattern will be just as "random" as the write > access pattern was in your original design. So unless your SDRAM's > read cycle is shorter than its write cycle, this won't save you any > time. > > > -- > Oli > Yeah, this is what I am looking for. For this algorithm, my question is, what is the condition when I am able to write from memory address 0 again when the first iteration is over, without overwritting the data already in the memory. I was aware that, the read operations are with random addresses, however writes are sequential. If the write pointer flips back from upper bound to address zero too early, it may overwrite some data. How do I calculate the upper bound?Article: 117369
On Mar 29, 12:45 am, "Allen" <lphp...@gmail.com> wrote: > On Mar 25, 8:39 pm, "John McCaskill" <junkm...@fastertechnology.com> > wrote: > > > > > On Mar 25, 12:32 am, "Allen" <lphp...@gmail.com> wrote: > > > > On Mar 23, 8:33 pm, Zara <me_z...@dea.spamcon.org> wrote: > > > > > On 23 Mar 2007 05:13:46 -0700, "Allen" <lphp...@gmail.com> wrote: > > > > > >hi all, > > > > > >first, i am sorry for my poor English. > > > > > >i use EDK 7.1i and ISE 7.1i. > > > > > >imported custom peripheral with PLB Master Interface ( not from IFIP ) > > > > >into my .xps project after overcame several problems. > > > > > >In the step " generate netlist " there has no error or warning. > > > > > >but in the step "Generate Bitstream",i got a error message "ERROR!! > > > > >NgdBuild:455 plb_M_ABUS<62> has multiple driver(s)": return code 2 > > > > >abort. > > > > > >already search this problem in xilix's answer database and tried > > > > >modify the parameter of C_BaseAddr, but it is still stuck here. > > > > > >does anyone meet this problem before? > > > > > >thanks in advance. > > > > > I always got that messaghe when I had some signal with two outputs > > > > connected to it. That seems your case, in your plb address bus, master > > > > interface. > > > > > Best regards, > > > > > Zara- Hide quoted text - > > > > > - Show quoted text - > > > > Thanks for your reply. > > > > so it might mean something wrong during import of custom peripheral? > > > > but in 64-bit PLB protocol, the address width is 32-bit. > > > > i already use (C_PLB_AWIDTH-1) to replace with constant "31" in my > > > port declaration. > > > > do anything i could try to solve this problem? > > > > thank you :-) > > > The PLB address width is 32 bits. However, the way that all the bus > > signals are connected to the PLB IP is that they are concatenated > > together. So if you look at the MPD file for the PLB you will see that > > it defines the bus width to be 32 bits times the number of masters: > > > PORT M_ABus = M_ABus, DIR = I, VEC = [0: > > (C_PLB_NUM_MASTERS*C_PLB_AWIDTH)-1] > > > So the signal plb_M_ABUS<62> is bit 30 of the second master. Look to > > see if your core has been assigned the second master slot on the PLB > > bus. Assuming that is the case, find what two sources are driving bit > > 30 of the address. > > > Assuming that you left the name of your EDK project as system.xmp, > > when you tell EDK to generate a netlist it will create a top level hdl > > file named either system.vhd or system.v depending on your tool > > settings. You can look at this file to see how EDK has connected the > > cores to the PLB. > > > It has been a while since I had to find a multi source signal, but I > > think that XST will produce a warning about it in its report and tell > > you what the multiple soures are. Look in the synthesis report file > > for the appropriate core and see if it tells you what the source of > > the problem is. > > > Since you are creating your own interface design instead of using the > > IPIF, are you using the bus functional models in your simulations? I > > use these, and they make the job much easier. I think it was not > > until EDK 8.1 that they were integrated into EDK itself, but I was > > able to use the CoreConnect tool kit directly from IBM in some of our > > early stuff. The bus monitors will tell you as soon as your core has > > done something wrong, so you do not have to track the source of the > > problem back from when the symptoms show up. 
> > > Regards, > > > John McCaskillwww.fastertechnology.com-Hide quoted text - > > > - Show quoted text - > > Thanks for your reply. :-) > > Where i could see the second master is who( power pc or custom > peripheral ... etc)? > I don't know where to check this. While there may be an easier way, you can just look at a PLB master signal that is not a vector. For example, if you look at plb_M_RNW in your system.vhd you will see that it is defined as: signal plb_M_RNW : std_logic_vector(0 to 2) The entire vector is an input to the plb_wrapper. plb_M_RNW(0) will go to PLB master 0, plb_M_RNW(1) will go to PLB master 1, etc. > > I opened the system.vhd to see who connect to the plb_M_Abus. > "PowerPC" have 2 ports and Custom peripheral has 1 port connect to the > plb_M_ABus. > Next step, I am going to find the XST report file to see who drive the > plb_M_ABUS<62>. No offense intended, but with just the PowerPC, and your new custom peripheral on the PLB bus the odds are that your peripheral is the source of the problem. Take a look at the synthesis report for it. In EDK, in the "Project Information Area" pane, and the Project tab, expand the "Report Files" selection. Find the one for your peripheral and look through it. I think there should be a warning about multiple sources driving a destination at this point. If you do not see it there, try looking through the implementation/xflow.log file in the "Log Files" section. > > I didn't run the simulation of this platform. I heard the "bus > functional model" before, but I don't understand how to use it. Would > you like to give any information about this one? > Take a look at: http://www.xilinx.com/ise/embedded/edk6_3docs/bfm_simulation.pdf This link is for the EDK 6.3 version of the documentation, but you should have a more recent version in your EDK distribution at $EDK/doc/ bfm_simulation.pdf. One of the main things that I like about using the bus functional models is that the bus monitors tell you when a error occurred in the simulation, and what the error was. This saves you the effort of having to search backwards from where the symptoms of the error show up to figure our what has gone wrong. > In addition the error"NgdBuild 455", there are several warnings "SFF > Primitive", i didn't find this on the Xilinx answer database.. maybe > this warning has something to do with the error. > > Thank you very much~ Regards, John McCaskill www.fastertechnology.comArticle: 117370
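To make the indexing concrete: master m occupies bits m*C_PLB_AWIDTH through (m+1)*C_PLB_AWIDTH - 1 of the concatenated vector, so with C_PLB_AWIDTH = 32, plb_M_ABUS<62> is 1*32 + 30, i.e. address bit 30 of the second master. A hypothetical helper (not from the thread, names invented) that slices one master's bus out of the concatenation:

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical helper: slices one master's address bus out of the
-- concatenated PLB M_ABus vector defined in the MPD quoted earlier.
-- With C_PLB_AWIDTH = 32, index 62 = 1*32 + 30, i.e. address bit 30
-- of master 1 (the second master on the bus).
entity plb_master_slice is
  generic (
    C_PLB_AWIDTH      : natural := 32;
    C_PLB_NUM_MASTERS : natural := 3;
    MASTER_ID         : natural := 1     -- 0 = first master, 1 = second, ...
  );
  port (
    plb_M_ABus : in  std_logic_vector(0 to C_PLB_NUM_MASTERS*C_PLB_AWIDTH - 1);
    m_ABus     : out std_logic_vector(0 to C_PLB_AWIDTH - 1)
  );
end entity plb_master_slice;

architecture rtl of plb_master_slice is
begin
  m_ABus <= plb_M_ABus(MASTER_ID*C_PLB_AWIDTH to (MASTER_ID + 1)*C_PLB_AWIDTH - 1);
end architecture rtl;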
On Mar 29, 8:10 am, "Adnan" <madnan.ras...@gmail.com> wrote: > Hi everyone, I have an FPGA board with two ethernet MAC interfaces. I > want to connect both interfaces in a way that transmitter of one EMAC > IF is connected with reciever of other and vice versa. I have to have > seemless connection. Please let me know whether I would be able to > achieve it without putting EMAC core in FPGA. Is there any issues of > MII that I am supposed to deal with except clock synchronization that > can be dealt by using Block ram or some other buffering thing. > Looking for some timely reply > > MADNAN 1. If you have a MAC/PHY integrated chip, then just connect the 2 PHY outputs together with or without transformer 2. If your MAC has RMII output, just wire the RMIIs together on the 2 MACs 3. If you have only MII, inside the FPGA you have to do the MII buffering, to interface the 2 MACs together. Here is one reference design: http://www.intel.com/design/network/products/lan/applnots/vhdl-code.htm http://www.intel.com/design/network/products/lan/docs/LXT973_docs.htm ZoltanArticle: 117371
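A rough sketch of option 3 (MII buffering inside the FPGA) follows, assuming a dual-clock FIFO is available (for example one generated with CORE Generator). The async_fifo component name and its ports are placeholders rather than a real core's interface, only one direction is shown, and preamble handling, RX_ER and underrun protection are deliberately left out.

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical sketch only: receive nibbles from PHY A are pushed through
-- a dual-clock FIFO and replayed on PHY B's transmit interface, absorbing
-- the small frequency difference between the two 25 MHz MII clocks.
-- "async_fifo" is a placeholder for a generated dual-clock FIFO; a real
-- design would buffer a threshold (or a whole frame) before reading so the
-- transmit side cannot underrun mid-frame, and would also carry RX_ER.
entity mii_passthru is
  port (
    a_rx_clk : in  std_logic;                      -- MII RX side of PHY A
    a_rxd    : in  std_logic_vector(3 downto 0);
    a_rx_dv  : in  std_logic;
    b_tx_clk : in  std_logic;                      -- MII TX side of PHY B
    b_txd    : out std_logic_vector(3 downto 0);
    b_tx_en  : out std_logic
  );
end entity mii_passthru;

architecture rtl of mii_passthru is
  component async_fifo                             -- placeholder component
    port (
      wr_clk : in  std_logic;
      wr_en  : in  std_logic;
      din    : in  std_logic_vector(3 downto 0);
      rd_clk : in  std_logic;
      rd_en  : in  std_logic;
      dout   : out std_logic_vector(3 downto 0);
      empty  : out std_logic
    );
  end component;

  signal fifo_rd_en : std_logic;
  signal fifo_empty : std_logic;
  signal fifo_dout  : std_logic_vector(3 downto 0);
  signal rd_valid   : std_logic := '0';
begin
  fifo_rd_en <= not fifo_empty;

  u_fifo : async_fifo
    port map (
      wr_clk => a_rx_clk,
      wr_en  => a_rx_dv,            -- store nibbles only while RX_DV is high
      din    => a_rxd,
      rd_clk => b_tx_clk,
      rd_en  => fifo_rd_en,
      dout   => fifo_dout,
      empty  => fifo_empty
    );

  -- rd_valid accounts for the one-cycle read latency of a standard FIFO
  process (b_tx_clk)
  begin
    if rising_edge(b_tx_clk) then
      rd_valid <= fifo_rd_en;
    end if;
  end process;

  b_txd   <= fifo_dout;
  b_tx_en <= rd_valid;
end architecture rtl;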
Symon wrote: > "Herbert Kleebauer" <klee@unibwm.de> wrote in message > > Does anybody know whether there is a company which sells Spartan3 > > chips (or any other FPGA type) already soldered to an adaptor board > > like: > > > > http://www.rsonline.de/cgi-bin/bv/rswww/searchBrowseAction.do?N=0&Ntk=I18NAll&Ntt=295-4331 > > > > Preferable with the GND an VCC pins already connected and capacitors > > on the adapter. > > > Hi Herbert, > Apart from the ones John mentioned? > http://www.enterpoint.co.uk/component_replacements/craignell.html > > Also, this might help. > > http://www.fpga-faq.com/FPGA_Boards.shtml Yes I know, there are many development boards, but we don't want a board but the chip. As there is a big difference whether you make a design at gate level using a schematic entry or use high level VHDL code, there also is a big difference whether you buy a CPU and built a computer system or you buy a ready to use motherboard to built a computer system. The same is true for FPGA's and ready to use FPGA development boards. You have to go at least once to the low level to understand the problems, then you can do it at a higher level.Article: 117372
Martin Thompson wrote: > "Daniel S." <digitalmastrmind_no_spam@hotmail.com> writes: > >> Since I do not wish to spend much time on control logic during the >> approximation phase, I simply connect the N control signals to an >> N-bits PRNG with a few IOBs blended in to prevent synthesis from >> simplifying the dummy control logic and everything else >> thereafter. Once the data/processing pipeline is in place and looks >> good, I can start implementing and optimizing the real control logic >> using simulations - I usually try to save as much of the control logic >> as possible for last since relatively minor data path refinements can >> have a major impact on the control logic and optimization >> opportunities... I am lazy so I try to avoid moving targets whenever >> possible. > > That must be where we differ then - most of my designs, its the > control logic that takes most of the effort, especially handling error > cases. The actual "processing" isn't usually my problem... Re-read my paragraph... I think you misread it: I fully agree that control logic is indeed the most complicated part of most designs. What I said is that I keep control for late/last to avoid having to recode it should any significant alterations in the pipeline become necessary due to timing/area/other constraints. > Now, off to write code which will no doubt prove me wrong within an > hour ;-) "Hey Martin, there's a new feature we added to the specs... you will have six more control bits and two extra pipeline stages to manage. Unfortunately for you, the extra stages and controls sit smack in the middle of the two spots that gave you so many headaches over the last week or two." Of course, waiting until the data path is completed only reduces the likelihood of late changes ruining control efforts, it does not immunize against late changes... and it still requires a "disposable" (minimum effort) controller for preliminary data path testing.Article: 117373
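For what it is worth, the "disposable controller" / PRNG trick described in the quoted text can be as small as a free-running LFSR whose bits stand in for the real control inputs. A hedged sketch (width, taps and names are arbitrary choices, not Daniel's actual code):

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical sketch of the "disposable controller": a free-running
-- maximal-length 16-bit Fibonacci LFSR (taps 16,14,13,11) whose bits stand
-- in for up to 16 real control signals, so synthesis cannot reduce the
-- datapath's control inputs to constants while the real controller is
-- still unwritten.
entity dummy_ctrl is
  generic (
    N : natural := 8               -- number of control bits to fake (N <= 16)
  );
  port (
    clk  : in  std_logic;
    ctrl : out std_logic_vector(N - 1 downto 0)
  );
end entity dummy_ctrl;

architecture rtl of dummy_ctrl is
  signal lfsr : std_logic_vector(15 downto 0) := x"ACE1";  -- any non-zero seed
begin
  process (clk)
  begin
    if rising_edge(clk) then
      lfsr <= lfsr(14 downto 0) &
              (lfsr(15) xor lfsr(13) xor lfsr(12) xor lfsr(10));
    end if;
  end process;

  ctrl <= lfsr(N - 1 downto 0);
end architecture rtl;

In practice you would also route some reduction of the datapath outputs to pins (or mark nets with KEEP), as the quoted text notes, so the datapath itself cannot be optimized away while the real controller is still missing.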
On Wed, 28 Mar 2007 10:12:15 -0700, jidan1 wrote: > Hi, > > To program my Atmel(ATmega128L) controller and Xilinx FPGA (sparta-3 > XCS400) at the same time, I decided as a programmer to use the Xilinx > Parallel Cable III. I implemented the programmer 100% the same as found > in Xilinx's website ( > http://toolbox.xilinx.com/docsan/xilinx4/data/docs/pac/appendixb.html). > The programmer worked, but not without problems. For programming the > Atmel uC I used AVRdude, and for the FPGA, ISE9.1i. The ribbon-cable > from the LPT port to Programmer was 20cm long and from the programmer to > the uC/FPGA board was not more than 10 cm. The problem related with this > programmer were verification errors, i.e the PC can't program or read > properly from the board. The interesting thing is how these verification > problem came up. In the morning when I turn the PC and Board on, these > verification errors are a lot. When the code that I want to download is > big, its impossible to program the FPGA/uC, for small codes it works but > after many tries. After 10 tries or so, the programming works correct > and no verification errors no matter how many times I try to download > the code or how big the code is, and this without even touching a thing > on the hardware!!! > > Since this problem applies to more than one board, I assume the problem > must lay on the programmer itself. My explanation is that maybe the > buffer IC's get hot or something...I really don't know. I want to know > if there is anyone who has had problems with this programmer cable. > > > Thank You, > JJ I have a simple upgrade to the parallel cable III that has Schmitt triggers on the clock, and a simple shunt regulator with an LED that limits internal VCC to about 4V, giving TTL compatible LPT interface even if used with 5V JTAG. It works reliably with 2.5V to 5V JTAG chips I have PCBs which I can send for free as long as its in the USA (1"x1" card in envelope) Requires SMT assy Peter WallaceArticle: 117374
Peter Wallace <pcw@karpy.com> wrote: ... > > I have a simple upgrade to the parallel cable III that has Schmitt > triggers on the clock, and a simple shunt regulator with an LED that > limits internal VCC to about 4V, giving TTL compatible LPT interface even > if used with 5V JTAG. > It works reliably with 2.5V to 5V JTAG chips > I have PCBs which I can send for free as long as its in the USA (1"x1" > card in envelope) Requires SMT assy > A combination of LVC Single Gate Schmitt Trigger and the LVC1T45 level translator also works fine Bye -- Uwe Bonnes bon@elektron.ikp.physik.tu-darmstadt.de Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt --------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------