Nicolas Matringe wrote: > Peter Alfke a écrit : > > Rule #1: Never feed an asynchronous input into more than one > > synchronizer flip-flop. Never ever. > > Got bitten once there. Never twice. Is it possible that this situation could occur if register duplication is enabled (to improve timing) in the tools (eg XST)? If so, is there a method to mark the synchronizer in HDL to ensure it is never automatically duplicated? TomArticle: 111226
ghelbig@lycos.com wrote: > You can't; the serial port is a debug console for the embedded > processor. > > To program the device, you need a Xilinx-compatible JTAG adapter. > > GH Avnet do a few boards so I can't quite locate the particular device you are referring to. For example, the Xilinx® Virtex™-II Pro FG456 Kit requires a JTAG adapter of some description to program it. I use a Xilinx Parallel Cable III for my development board. I don't know how much they cost today, as mine came with the rather expensive Xilinx development software many years ago! There is a design for a homebrew parallel cable on the web (see website http://toolbox.xilinx.com/docsan/xilinx4/data/docs/pac/appendixb.html). I have not used this design before so I cannot say with any confidence that it works. STOP: Before using homebrew programming cables - double check (no, triple check) the power supply arrangements. Getting +5V confused with +3.3V will lead to lots of expensive, smelly smoke! DaveArticle: 111227
The first synchronizing flip flop on an async input will experience metastability, sooner or later. Whether that metastability lasts long enough to cause a functional problem depends on how the output is used. If it becomes a causal input (i.e. clk or async rst) to something else, it can become a problem very quickly (read: "don't do that"). If there is very little timing margin to the next non-causal (i.e. D, CE, or sync rst) input(s), then it can also cause problems fairly quickly. The admonishment to "add a second flop" is usually an attempt to create a high slack/margin path to the next clocked element, but may not be sufficient. Ideally, that path (or any path out of the first synchronizing flop) should be constrained to be faster than the clock period would indicate, to force the synthesis/P&R process to provide extra timing margin (slack), in case MS should delay the output a bit. The more slack/margin, the more immunity to MS a design has. Also, the first synchronizing flop on an input should have a no-replicate constraint on it, just in case the synth/P&R tool wants to replicate it to solve fanout problems from that first flop. Also recognize that even async rst/prst inputs to flops must be properly synchronized with respect to the deasserting edge, since that edge is effectively a "synchronous" input, subject to setup/hold requirements too. Whether or not a problem is caused by metastability or by improper synchronization, it is still solved by the same proper synchronization techniques. It is true that MS has been reduced significantly by the newer, faster FPGA devices, but it is not totally eliminated, and the higher speeds & tighter timing margins of designs implemented in these FPGAs at least partially offset the improvements in MS in the flops themselves. Follow the guidelines in the app notes for simultaneous switching outputs, and properly ground/bypass the on-board PDS, and ground bounce will not be an issue. Once it becomes an issue, there are numerous "creative" solutions to the problem, but they are best avoided up front. Andy John Kortink wrote: > On 27 Oct 2006 14:25:09 -0700, "Peter Alfke" <peter@xilinx.com> wrote: > > >To paraphrase Karl Marx: > >A spectre is haunting this newsgroup, the spectre of metastability. > >Whenever something works unreliably, metastability gets the blame. But > >the problem is usually elsewhere. > > In my experience, ground bounce is a bigger problem. > Especially in a device that is nearly 'full' it is > wise to invest in a few fabricated grounds (dedicate > a pin at a strategic location, i.e. as far away as > possible from other ground pins, drive it to ground, > and tie it to ground externally). > > When you find that moving cells around alleviates or > intensifies observed instabilities, you may want to > look into ground bounce problems. > > >Metastability causes a non-deterministic extra output delay, when a > >flip-flop's D input changes asynchronously, and happens to change > >within an extremely narrow capture window (a tiny fraction of a > >femtosecond !). This capture window is located at an unknown (and > >unstable) point somewhere within the set-up time window specified in > >the data sheet. The capture window is billions of times smaller than > >the specified set-up time window. The likelihood of a flip-flop going > >metastable is thus extremely small. The likelihood of a metastable > >delay longer than 3 ns is even less. 
> >As an example, a 1 MHz asynchronous signal, synchronized by a 100 MHz > >clock, causes an extra 3 ns delay statistically once every billion > >years. If the asynchronous event is at 10 MHz, the 3 ns delay occurs > >ten times more often, once every 100 million years. > >But a 2.5 ns delay happens a million times more often ! > >See the Xilinx application note XAPP094 > >You should worry about metastability only when the clock frequency is > >so high that a few ns of extra delay out of the synchronizer flip-flop > >might causes failure. The recommended standard procedure, > >double-synchronizing in two close-by flip-flops, solves those cases. > > I've found that one synchronizing flip-flop was not enough > in one particular case (from a 4-ish to 50-ish MHz domain). > Two was. Does one ever work reliably ? Or has the 'window' > become smaller in the past few years ? > > > John Kortink > Windfall Engineering > > -- > > Email : kortink@inter.nl.net > Homepage : http://www.windfall.nl > > Your hardware/software designs realised !Article: 111228
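For reference, a minimal VHDL sketch of the two-flop synchronizer discussed in the post above. The entity and signal names are illustrative, and the no-replicate and extra-slack constraints Andy recommends are tool-specific, so they are only noted in comments here:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity sync2ff is
  port (
    clk      : in  std_logic;
    async_in : in  std_logic;   -- signal from another clock domain or a pin
    sync_out : out std_logic    -- safe to use in the clk domain
  );
end entity sync2ff;

architecture rtl of sync2ff is
  -- The first flop may go metastable; the second gives it a full clock
  -- period (minus routing) to settle. Constrain the meta -> stable path
  -- tightly and keep "meta" from being replicated, as discussed above.
  signal meta, stable : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      meta   <= async_in;
      stable <= meta;
    end if;
  end process;
  sync_out <= stable;
end architecture rtl;
```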
Bingo, thanks!!!!!!!!!!!!! Laurent, Yes, there is a jumper. Best Regards, Cheng AMONTEC schrieb: > Hi Cheng, > > JTAG = Boundary Scan = large shift-register (TDI TDO signals) + state > machine TAP control (TMS signal) > JTAG is a synchronous bus (TCK signal), as a SPI bus. > > If Xilinx Impact only detect one device, this means you only have one > device in the JTAG chain. > You maybe have jumpers (or 0 ohm resistors) on your board for routing > the JTAG chain. > > Regards, > Laurent Gauch > ____________________________ > http://www.amontec.com > Introducing new JTAGkey Tiny > > uvbaz wrote: > > > Thanks, Laurent > > > > Is JTAG Chain = Boundary Scan Chain? Yes, it is "Automatically > > connected to a cable and identity Boundary Scan Chain". > > > > Unfortunately, impact detect only one FPGA. Which mistakes cause this > > happen: PINs are not correctly connected? impact is not properly > > adjusted? or something else? > > > > thanks again > > > > Regards, > > Cheng > > > > > > AMONTEC schrieb: > > > > > >>uvbaz wrote: > >> > >>>hallo, > >>> > >>>i have 2 FPGAs(both Xilinx Virtex-4) on one Board, and only one > >>>Cable(Platform Cable USB). How can i configuration them under iMPact? > >>> > >>>Somebody help me!!! :) > >>> > >>>Cheng > >>> > >> > >>Using JTAG chain or using other configuration mode? > >> > >>With two Xilinx FPGAs in the same JTAG chain, the Xilinx Impact software > >>will auto-detect your both devices. Then Impact will ask to associated > >>the .bit file for each devices ... > >> > >>Regards, > >>Laurent Gauch > >> ____________________________ > >> http://www.amontec.com > >> Introducing new JTAGkey Tiny > > > >Article: 111229
"KJ" <Kevin.Jennings@Unisys.com> wrote in message news:1162303126.746950.165570@i42g2000cwa.googlegroups.com... > From the two responses it would appear that all we're talking about is > two independent data bus sizes. Address size of the various ports is a > calculated value determined from the data bus size and memory size. Yup, that's what I would assume (since nothing else makes sense :-)) >> I'm not sure why none of the synthesis tools support this, but it's true, >> they don't. I've always ended up instantiating something to get this >> behaviour. :-( > Not sure what support you think you're not getting. Memory can be > inferred from plain vanilla VHDL with synthesis tools. Data bus sizing > (and the implied address bus sizing) is a wrapper around that basic > memory structure and gets synthesized just fine...so it is supported. The *functionality* is supported, but the optimal mapping to the technology is not. Or wasn't last time I looked. If I write that plain vanilla VHDL, I have never seen a synthesis tool create an asymetrically-ported RAM from it; I always got a RAM with a multiplexor on the output (or worse, a bunch of DFFs). > If what you mean by 'not supported' is that there isn't a pre-defined > instance that you can plop down into your code and parameterize, then > you're going into the Mr. Wizard territory which leads you to vendor > specific implementations. Avoiding the vendor specific stuff is > usually a better approach in most cases. In many cases, I would agree, so long as you don't end up crippling your design's performance as a result, and spending money on silicon features that you're not going to use. After all, they were put there to help you make your design as efficient as possible (which managers usually like). Certainly making sure that vendor-specific functions are isolated in the code so they can be swapped out at will is a sensible practice. As is making a careful risk assessment whenever you consider using a feature that only one vendor or device family supports. > Presumably this is because the FPGA vendors would > rather go the Mr. Wizard path and try to lock designers in to their > parts for irrational reasons With all due respect, I think you presume too much. There are many problems with wizards and core generators for things like RAMs and arithmetic elements - mostly, they are the wrong level of abstraction for most designs. Nevertheless, IP cores from FPGA vendors serve two major purposes. Firstly, they help designers get the most out of the silicon in those cases where synthesis tools are not sophisticated enough to produce optimal results. Secondly, they allow designers to buy large blocks of standards-compliant IP - such as error correction cores, DDR controllers, and what have you - instead of designing them in-house. I'm not denying that there is a risk of vendor lock-in, but I'd dispute that it's the motivating factor for vendors to develop IP. Certainly when members of the IP development team that I belong to here at Xilinx sit down with the marketing department and discuss roadmaps, the questions that come up are always "What are customers asking for? What is good/bad about our current product? What new features do we need?", not "How can we ensnare more hapless design engineers today?". :-) Cheers, -Ben-Article: 111230
Brad Smallridge wrote: [snip] > > Another vote for National Chips if the highest frequency at SDR is 57MHz. > The Camera Link standard should go to 80MHz. > Actually 85 MHz. x 7 = 595 Mb/s per link pair. I've used the National chips at this frequency for some time now and have not found that I need to adjust skew, however we generally use fairly expensive cabling to keep the pair-to-pair skew low. I would think the pin count is the only good reason not to use the National chips. Also make sure you have adequate ESD protection. I generally don't like running external cables directly into expensive ball-grid arrays (doubly expensive to repair). Just my 2 cents, GaborArticle: 111231
Tom a écrit : > Nicolas Matringe wrote: >> Peter Alfke a écrit : >>> Rule #1: Never feed an asynchronous input into more than one >>> synchronizer flip-flop. Never ever. >> Got bitten once there. Never twice. > > Is it possible that this situation could occur if register duplication > is enabled (to improve timing) in the tools (eg XST)? In theory it could. > If so. is there a method to mark the synchronizer in HDL to ensure it > is never automatically duplicated? I am sure there is but I haven't used a Xilinx part for quite a long time. Austin or Peter (or many other readers) could give you a more accurate answer. NicolasArticle: 111232
"Nicolas Matringe" <nicolas.matringe@fre.fre> wrote in message news:45476ba0$0$3879$426a74cc@news.free.fr... > Tom a écrit : >> Nicolas Matringe wrote: >>> Peter Alfke a écrit : >>>> Rule #1: Never feed an asynchronous input into more than one >>>> synchronizer flip-flop. Never ever. >>> Got bitten once there. Never twice. >> Is it possible that this situation could occur if register duplication >> is enabled (to improve timing) in the tools (eg XST)? > In theory it could. You may be right, but I think it's unlikely. If you're re-synchronizing properly then there are two levels of FFs, and the front-end FFs have a fanout of exactly one net each. So there is nothing to be gained by duplicating them, even if the back-end stage has a high fanout (the second stage would be duplicated instead). In fact, register duplication rarely makes timing better; in fact in many high-performance pipelined designs, it can make it much worse (explanation available on demand). >> If so. is there a method to mark the synchronizer in HDL to ensure it >> is never automatically duplicated? Yes. You can use the REGISTER_DUPLICATION constraint in your source code or XCF file to specifically turn this feature on or off for a specific entity or module. Cheers, -Ben-Article: 111233
On Mon, 30 Oct 2006 11:01:34 -0800, daver2 wrote: > Kevin Neilson wrote: >> daver2 wrote: >> > I am implementing an extremely old logic design (circa 1965!) on a >> > Xilinx Virtex 4 (XC4VLX25). >> > >> > For those interested - it is the logic for the computer that flew to >> > the moon and back! The design is based on approximately 5,000 3-input >> > NOR gates and not a flip flop in sight! For the circuit diagrams see >> > website 'http://klabs.org/history/ech/agc_schematics/index.htm'. >> > >> > >> Does the design implement flops using NORs with feedback? If so, that >> could be an issue. The ISE tools don't like combinational feedback. >> -KEvin > > Kevin, > > Yes, the design does use NOR gates with feedback to create the flipflops > - and yes I am getting warnings from XST about combinatorial loops. I am > (as we speak) going through the network that the synthesiser has > generated for the 4LUT's to see that what it has generated is what > should have been generated! > > The original logic would have potentially suffered from unstable > oscillation and glitches if it had not been designed properly in the > first place so it is my belief that ISE is complaining about something > that won't occur in reality. > > Dave What about simulating the original NOR gates with a NOR gate followed by a small synchronous delay/filter (maybe a 2 bit counter dead ended at 0 and 3) to swamp out the routing delays. Then the delay tolerances could be as good as the original NOR gates. Peter WallaceArticle: 111234
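Peter's suggestion could look something like the sketch below: the raw 3-input NOR drives a 2-bit counter that dead-ends at 0 and 3, and the filtered output only changes once the raw value has been stable for a few fast-clock ticks. Entity and signal names are made up for illustration, not taken from the AGC netlist:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity nor3_filtered is
  port (
    clk     : in  std_logic;   -- fast "oversampling" clock
    a, b, c : in  std_logic;
    y       : out std_logic
  );
end entity nor3_filtered;

architecture rtl of nor3_filtered is
  signal raw   : std_logic;
  signal count : unsigned(1 downto 0) := "00";
  signal y_reg : std_logic := '0';
begin
  raw <= not (a or b or c);

  process (clk)
  begin
    if rising_edge(clk) then
      if raw = '1' and count /= 3 then
        count <= count + 1;          -- count up toward 3, dead-end there
      elsif raw = '0' and count /= 0 then
        count <= count - 1;          -- count down toward 0, dead-end there
      end if;
      if count = 3 then
        y_reg <= '1';                -- output flips only after stable highs
      elsif count = 0 then
        y_reg <= '0';                -- or after stable lows
      end if;
    end if;
  end process;

  y <= y_reg;
end architecture rtl;
```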
Uncle Noah wrote: > Hi guys > > Nice touch Bob, I laughed my @ss off!!! > > But you are right; young boys like Jagsa donnot have attitude do they? > At least, they can disclose the name of their institute, this would > prove useful :) > > Anyway, Jaksa-boy do your homework please. If you don't feel confident > to the least, then this is a free world. You can always change > profession, you are just starting... You might even learn a craft... > > But, hey boy, your institute is paying a sh!tpile of money so that you > can have Quartus-II and not get your feet wet with open stuff like GHDL > and Signs. Give them a break and sit your @ss down. It's just a trivial > design in VHDL, it's not rocket science really. In one or two weeks of > mediocre labour, you will be done. > > That said, you could browse some nice projects here and there in the > web (well-known site: http://www.opencores.org) but that's not the > point, is it? > > BTW what's your institute, i might have to ring a few bells there ^o^ > ^O^ (+ laughing his @ss off for the entire day). > > Cheers Jacko > > > Jaksa wrote: > > I am using Altera Quartus II software. Why? I didn't expect that someone would gave me VHDL codes. I said "any kind of help would use to me". Thanks Andrew.Article: 111235
Hi, I'm planning to design an SPDIF receiver for implementation on a Spartan-3 FPGA, but I'm not sure how to go about the design. Does anyone have ideas? Thank you.Article: 111236
Nice for you ;-) Regards, Laurent Gauch ____________________________ http://www.amontec.com Introducing new JTAGkey Tiny uvbaz wrote: > Bingo, > > thanks!!!!!!!!!!!!! Laurent, > > Yes, there is a jumper. > > Best Regards, > Cheng > > > AMONTEC schrieb: > > >>Hi Cheng, >> >>JTAG = Boundary Scan = large shift-register (TDI TDO signals) + state >>machine TAP control (TMS signal) >>JTAG is a synchronous bus (TCK signal), as a SPI bus. >> >>If Xilinx Impact only detect one device, this means you only have one >>device in the JTAG chain. >>You maybe have jumpers (or 0 ohm resistors) on your board for routing >>the JTAG chain. >> >>Regards, >>Laurent Gauch >> ____________________________ >> http://www.amontec.com >> Introducing new JTAGkey Tiny >> >>uvbaz wrote: >> >> >>>Thanks, Laurent >>> >>>Is JTAG Chain = Boundary Scan Chain? Yes, it is "Automatically >>>connected to a cable and identity Boundary Scan Chain". >>> >>>Unfortunately, impact detect only one FPGA. Which mistakes cause this >>>happen: PINs are not correctly connected? impact is not properly >>>adjusted? or something else? >>> >>>thanks again >>> >>>Regards, >>>Cheng >>> >>> >>>AMONTEC schrieb: >>> >>> >>> >>>>uvbaz wrote: >>>> >>>> >>>>>hallo, >>>>> >>>>>i have 2 FPGAs(both Xilinx Virtex-4) on one Board, and only one >>>>>Cable(Platform Cable USB). How can i configuration them under iMPact? >>>>> >>>>>Somebody help me!!! :) >>>>> >>>>>Cheng >>>>> >>>> >>>>Using JTAG chain or using other configuration mode? >>>> >>>>With two Xilinx FPGAs in the same JTAG chain, the Xilinx Impact software >>>>will auto-detect your both devices. Then Impact will ask to associated >>>>the .bit file for each devices ... >>>> >>>>Regards, >>>>Laurent Gauch >>>> ____________________________ >>>> http://www.amontec.com >>>> Introducing new JTAGkey Tiny >>> >>> >Article: 111237
Hi, I had a very general question. I'd like to design a low-pass filter, and I was wondering what the general layout for one was. I'm currently using a National Instruments FPGA module. They sell a filter design kit for $1,000 and I'm wondering if I can avoid buying it. The National Instruments FPGA I'm currently using comes with one FIR filter example which uses 4 shift registers (plus the input) and three coefficients to filter the signal by a factor of 10 (200kHz -> 20kHz), which in itself is strange, since the window is only 5 points wide. I need to filter it down to 200 Hz for my application. I'm afraid of programming 500 shift registers. Even if I did something clever with the FIFO, in the end, there's a lot of multiplication, which is very costly. It seems like some kind of decimation strategy is my only hope? But this is certainly not the same as filtering, and the primary objective is to reduce the noise in real time. Or is there something similar to the FFT that divides and conquers, breaking it up into smaller parts to get it done? So I was wondering what the general strategy was for such filters.Article: 111238
Thank you for clearing this up ^_^ > > Jaksa wrote: > > > I am using Altera Quartus II software. Why? > > I didn't expect that someone would gave me VHDL codes. I said "any kind > of help would use to me". > Thanks Andrew.Article: 111239
Building a filter by yourself in a HDL is fairly simple and a good learning experience. If I were you, I would look into IIR filters especially the Butterworth variety. Also, power of two multiplication and division involves a simple shift, which will reduce the time needed to filter the signal. ---Matthew Hicks <will.parks@gmail.com> wrote in message news:1162315909.783504.284260@f16g2000cwb.googlegroups.com... > Hi, > > I had a very general question. > > I'd like to design a low-pass filter, and I was wondering what the > general lay-out for one was. > > I'm currently using a National Instruments FPGA module. They sell a > filter design kit for $1,000 and I'm wondering if I can avoid buying > it. > > The National Instruments FPGA I'm currently using comes with one FIR > filter example which uses 4 shift registers (plus the imput) and three > coefficients to filter the signal by a factor of 10 (200kHz -> 20kHz). > Which in itself is strange, since the window is only 5 points wide? > > I need to filter it down to 200 Hz for my application. I'm afriad of > programming 500 shift registers. Even if I did something clever with > the FIFO, in the end, there's a lot of multiplication, which is very > costly. > > It seems like some kind of decimation strategy is my only hope? but > this is certainly not the same as filtering, and the primary objective > is to reduce the noise in real time. Or is there something similar to > the FFT that divides and conqueres, breaking it up into smaller parts > to get it done. > > So I was wondering what the general strategy was for such filters. >Article: 111240
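As one concrete illustration of the shift-only idea, a first-order IIR low-pass (the simplest case of what Matthew suggests) can be written as y <= y + (x - y) / 2**SHIFT, which needs no multiplier at all. The widths and SHIFT value in this sketch are placeholder choices, not tuned for the poster's application:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity iir_lowpass is
  generic (
    WIDTH : integer := 16;
    SHIFT : integer := 8     -- larger SHIFT -> lower cutoff
  );
  port (
    clk      : in  std_logic;
    sample   : in  signed(WIDTH-1 downto 0);  -- one new sample per clk
    filtered : out signed(WIDTH-1 downto 0)
  );
end entity iir_lowpass;

architecture rtl of iir_lowpass is
  -- keep SHIFT extra fraction bits in the accumulator to reduce limit cycles
  signal acc : signed(WIDTH+SHIFT-1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- acc holds y scaled by 2**SHIFT: acc += x - acc/2**SHIFT
      acc <= acc + (resize(sample, acc'length) - shift_right(acc, SHIFT));
    end if;
  end process;
  filtered <= resize(shift_right(acc, SHIFT), WIDTH);
end architecture rtl;
```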
Frank van Eijkelenburg wrote: > Hi, > > I have a bootloader running from internal ram (m4k blocks). I also build > a standalone application to run from external ram. The application is a > .bin file which is sent to the bootloader through a serial interface > (RS232). The bootloader copies the application to external ram and > executes it. My application is built to run from address 0x00200000. If > I make a .bin file from the generated .elf file I get a file of about 2 > MB. This is because the alt_exception and alt_irq_handler is laid at > address 0x20 and 0xEC. AFAIK this is not necessary. Do I have to make a > linker file to fix this? Or should I use another startup assembly file? > In case of a linker file, does anyone have an example for this situation? > > So the bootloader runs from internal ram (with base address 0) and the > application runs from external ram (with base address 0x00200000). I'm assuming you're producing the bootloader and application as separate programs and that the bootloader isn't using interrupts. In SOPC Builder, set the reset address to 0 and set the exception address to 0x00200000. When you extract the .bin file from the elf file, exclude the .reset section so you don't get the generated code at 0x0. MarkArticle: 111241
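For what it's worth, with the GNU objcopy shipped in the Nios II toolchain this is usually just a section-removal flag, e.g. something along the lines of `nios2-elf-objcopy -O binary --remove-section=.reset app.elf app.bin`; the tool prefix, section name and file names here are assumptions, so check them against your own build and generated linker script.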
will.parks@gmail.com wrote: > Hi, > > I had a very general question. > > I'd like to design a low-pass filter, and I was wondering what the > general lay-out for one was. Tee hee. filter .------. signal in | | signal out ---------->| |-----------> | | '------' What? This isn't helpful? That's probably because you could write a book about what to put in the 'filter' block. Common things would be an IIR linear filter or an FIR linear filter, but there are other options (including the decimation you mention later), and just "IIR or FIR" covers quite a bit of ground. > > I'm currently using a National Instruments FPGA module. They sell a > filter design kit for $1,000 and I'm wondering if I can avoid buying > it. Yes you can, but you have to know what you're doing. Ultimately you'll have to know what you're doing to really use the NI package as well, unless your capabilities far outstrip your requirements. Those sorts of packages are great for getting something working in the lab early, but without knowing what they do you can't effectively get that last 10% worth of performance that makes the difference between a product that's a disaster and a product that's a success. Of course, if you _do_ know what they do, you don't need them. "Understanding Digital Signal Processing" by Rick Lyons would be a big help, but you need answers faster, I assume. > > The National Instruments FPGA I'm currently using comes with one FIR > filter example which uses 4 shift registers (plus the imput) and three > coefficients to filter the signal by a factor of 10 (200kHz -> 20kHz). > Which in itself is strange, since the window is only 5 points wide? > > I need to filter it down to 200 Hz for my application. I'm afriad of > programming 500 shift registers. Even if I did something clever with > the FIFO, in the end, there's a lot of multiplication, which is very > costly. You aren't saying what you're filtering down _from_, but your mention of 500 shift registers (I assume you mean a 500 tap delay line) implies that you're sampling at somewhat less than 100kHz (if you're sampling at exactly 100kHz you need to read http://www.wescottdesign.com/articles/Sampling/sampling.html). I'll assume that you are sampling at 25kHz -- if you use one multiplier you'd only need to clock it at 12.5MHz, which is a pretty un-challenging clock speed. You'd still need that 500-tap delay line, however. > > It seems like some kind of decimation strategy is my only hope? but > this is certainly not the same as filtering, and the primary objective > is to reduce the noise in real time. Decimation is not filtering, but if you filter then decimate you can reduce both the necessary processing (you only have to run through that 500-tap filter once for each output sample, not once for each input sample) and have less data for following stages to slog through. Using a sinc^n filter has two advantages: it's light on processing resources, because all the 'multiplies' are by 1 or 0, and it sounds damn impressive when you throw the name at the boss. It works _very_ well in an environment where you're down sampling, because the filter has natural nulls at anything that would alias down to DC, and that's where you're usually most interested in what's going on. > Or is there something similar to > the FFT that divides and conqueres, breaking it up into smaller parts > to get it done. 
In theory you reach a point where it's more efficient to filter by performing an FFT on a block of data, windowing it by your desired filter function, then performing an IFFT back to the 'real world'. Rick Lyons' book goes into this. In practice this sort of optimization is very problem-dependent; using some sort of simple filter-and-decimate may take significantly fewer resources. > > So I was wondering what the general strategy was for such filters. > -- Tim Wescott Wescott Design Services http://www.wescottdesign.com Posting from Google? See http://cfaj.freeshell.org/google/ "Applied Control Theory for Embedded Systems" came out in April. See details at http://www.wescottdesign.com/actfes/actfes.htmlArticle: 111242
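A hedged sketch of the filter-and-decimate approach Tim describes, using a single-stage sinc (CIC) structure: integrate at the input rate, then difference at the decimated rate. One stage is just a boxcar average; cascading stages gives the sinc^n response. The decimation ratio, widths and names below are illustrative only:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity cic_decimator is
  generic (
    IN_WIDTH : integer := 16;
    RATIO    : integer := 125              -- e.g. 25 kHz in -> 200 Hz out
  );
  port (
    clk        : in  std_logic;
    sample_in  : in  signed(IN_WIDTH-1 downto 0);  -- one sample per clk
    sample_out : out signed(IN_WIDTH+6 downto 0);  -- grows by ~log2(RATIO) bits
    out_valid  : out std_logic
  );
end entity cic_decimator;

architecture rtl of cic_decimator is
  -- 7 extra bits: wide enough that the sum of RATIO full-scale samples
  -- never overflows for RATIO up to 128
  constant ACC_WIDTH : integer := IN_WIDTH + 7;
  signal integ : signed(ACC_WIDTH-1 downto 0) := (others => '0');
  signal prev  : signed(ACC_WIDTH-1 downto 0) := (others => '0');
  signal phase : integer range 0 to RATIO-1 := 0;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      out_valid <= '0';
      integ <= integ + resize(sample_in, ACC_WIDTH);  -- integrator, input rate
      if phase = RATIO-1 then
        phase      <= 0;
        sample_out <= integ - prev;                   -- comb, decimated rate
        prev       <= integ;
        out_valid  <= '1';
      else
        phase <= phase + 1;
      end if;
    end if;
  end process;
end architecture rtl;
```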
Hello, all, Does anyone know where I can get just a few (5 - 10) Xilinx XCS30-3TQ144C chips? Everyone who has them wants to sell me a minimum of a hundred or more. I just need a few to make repairs on equipment in the field. I just got a board back from a customer who had a lightning strike, and I had to salvage a chip off a test module to get his unit repaired. If anyone has a few of these chips lying around, I'd be glad to pay the going rate for them, too! I can probably use other speed ranges or temp ranges as well. I'm in the US, but that shouldn't make much difference; I seem to be buying my Xilinx chips from Australia these days! Thanks much in advance. JonArticle: 111243
aravind wrote: > Hi > Im planning to design an SPDIF receiver for implementation on > Spartan 3 FPGA , But im not sure how to go about the design,Does any > one have ideas ? > Thank u I did a quick Google search and found some very informative entries at Wikipedia. To get the low level format description I had to click through to the AES/EBU description and even more detail is available at one of the references given. This should not be a difficult design to figure out, but there are details that require attention depending on your application.Article: 111244
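To make the first step a little more concrete, the sketch below oversamples the S/PDIF input, times the gaps between transitions, and classifies them as short (half a bit cell, part of a '1') or long (a full bit cell, a '0'), which is the core of biphase-mark decoding. All names and numbers are placeholders, and preamble detection, subframe assembly and clock recovery are deliberately left out:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity spdif_bmc_sampler is
  port (
    clk       : in  std_logic;   -- oversampling clock, e.g. >= 16x the bit rate
    spdif_in  : in  std_logic;
    bit_out   : out std_logic;   -- decoded data bit
    bit_valid : out std_logic
  );
end entity spdif_bmc_sampler;

architecture rtl of spdif_bmc_sampler is
  -- placeholder: depends on oversampling clock and audio sample rate
  constant THRESHOLD : unsigned(7 downto 0) := to_unsigned(12, 8);
  signal d1, d2      : std_logic := '0';
  signal interval    : unsigned(7 downto 0) := (others => '0');
  signal short_seen  : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      d1 <= spdif_in;            -- double-register the asynchronous input
      d2 <= d1;
      bit_valid <= '0';
      if d1 /= d2 then           -- transition detected
        if interval >= THRESHOLD then
          bit_out    <= '0';     -- one long interval = logic 0
          bit_valid  <= '1';
          short_seen <= '0';
        elsif short_seen = '1' then
          bit_out    <= '1';     -- two short intervals = logic 1
          bit_valid  <= '1';
          short_seen <= '0';
        else
          short_seen <= '1';     -- first half of a possible '1'
        end if;
        interval <= (others => '0');
      elsif interval /= 255 then
        interval <= interval + 1;  -- saturate rather than wrap
      end if;
    end if;
  end process;
end architecture rtl;
```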
Jon Elson <elson@pico-systems.com> wrote: > Hello, all, > Does anyone know where I can get just a few (5 - 10) Xilinx > XCS30-3TQ144C chips? Anybody who has them wants to sell me > a minimum of a hundred or more. I just need a few to make > repairs on equipment in the field. I just got a board back > from a customer who had a lightning strike, and I had to salvage > a chip off a test module to get his unit repaired. > If anyone has a few of these chips laying around, I'd be glad to > pay the going rate for them, too! I can probably use other speed > ranges or temp ranges as well. I'm in the US, but that shouldn't > make much difference, I seem to be buying my Xilinx chips from > Australia these days! This is again a case for the Xilinx "online shop"... -- Uwe Bonnes bon@elektron.ikp.physik.tu-darmstadt.de Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt --------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------Article: 111245
Todd wrote: > Hi all > > I'm a design engineer trying to evaluate the large number of > possibilities for adding Ethernet to our embedded system. > > Should I try to put the MAC inside the FPGA and just use an external > PHY? For gigabit ethernet, you might consider the low-cost LatticeECP2M FPGA, which has 3.125Gbps SERDES and advanced DSP blocks built in. http://www.latticesemi.com/products/fpga/ecp2/index.cfm In addition, for a 32-bit soft processor, check out the LatticeMico32, which is open source and free. http://www.latticesemi.com/products/intellectualproperty/ipcores/mico32/index.cfm Hope this helps, Bart Borosky, LatticeArticle: 111246
Ben Jones wrote: > > Presumably this is because the FPGA vendors would > > rather go the Mr. Wizard path and try to lock designers in to their > > parts for irrational reasons > > With all due respect, I think you presume too much. Perhaps I do, since I don't work for the FPGA vendors I can only speculate or presume. > There are many problems > with wizards and core generators for things like RAMs and arithmetic > elements - mostly, they are the wrong level of abstraction for most designs. Maybe. I find their lack of a standard on the 'internal' side to be the bigger issue. > Nevertheless, IP cores from FPGA vendors serve two major purposes. Firstly, > they help designers get the most out of the silicon in those cases where > synthesis tools are not sophisticated enough to produce optimal results. I agree, they are good at that. I don't believe that a unique entity is required in order to produce the optimal silicon. Once synthesis hits a standardized entity name it would know to stop and pick up the targeted device's implementation. > Secondly, they allow designers to buy large blocks of standards-compliant > IP - such as error correction cores, DDR controllers, and what have you - > instead of designing them in-house. And just exactly which standard interfaces are we talking about? DDRs have a JEDEC standard but the 'user' side of that DDR controller doesn't have a standard interface. So while you take advantage of the IC guys' standards to perform physical interfaces, you don't apply any muscle to standardizing an internal interface. The ASIC guys have their standard, Wishbone is an open specification, Altera has theirs, Xilinx has theirs.....all the vendors have their own 'standard'. Tell me what prevents everyone from standardizing on an interface to their components in a manner similar to what LPM attempts to do? The chip guys do it for their parts, the FPGA vendors don't seem to want to do anything similar on the IP core side. This doesn't prevent each company from implementing the function in the best possible way, it simply defines a standardized interface to basically identical functionality (i.e. it turns read and write requests into DDR signal twiddling in the case of a DDR controller). Can you list any 'standard' function IP where the code can be portable and in fact is portable across FPGA vendors without touching the code? Compression? Image processing? Color space converters? Memory interfaces? Anything? All the vendors have things in each of those categories and each has its own unique interface to that thing. > > I'm not denying that there is a risk of vendor lock-in, but I'd dispute that > it's the motivating factor for vendors to develop IP. I was only suggesting that it was an incentive...which you seem to agree with. KJArticle: 111247
The user community "pressures" the FPGA (and other IC) vendors to come up with better and cheaper solutions. That's called progress. We love it! We respond with new and improved chip families. We get some help from IC processing technology, i.e. "Moore's Law", especially in the form of cost reduction, and a little bit of speed improvement (less with each generation). We also have to fight negative effects, notably higher leakage currents. Real progress comes from better integration of popular functions. That's why we now include "hard-coded" FIFO and ECC controllers in the BlockRAM, Ethernet and PCIe controllers, multi-gigabit transceivers, and microprocessors. Clock control with DCMs and PLLs, as well as configurable 75-ps incremental I/O delays, are lower-level examples. These features increase the value of our FPGAs, but they definitely are not generic. If a user wants to treat our FPGAs in a generic way, so that the design can painlessly be migrated to our competitor, all these powerful, cost-saving and performance-enhancing features (from either X or A) must be avoided. That negates 80% of any progress from generation to generation. Most users might not want to pay that price. And remember, standards are nice and necessary for interfacing between chips, but they always lag the "cutting edge" by several years. Have you ever attended the bickering at a standards meeting?... Cutting edge FPGAs will become ever less generic. That's a fact of life, and it helps you build better and less costly systems. Peter Alfke ===========Article: 111248
gallen wrote: > I consider myself to still be a youngster. I'm only 24 years old and > I'm relatively recently out of college, but I find nothing you mention > here foreign. This stuff is still being taught in schools (though I > might argue my school didn't do a great job of it). The reality of it > all is that low level electronics remains useful. I have never once > regretted understanding how a transistor works. I have recently been > looking a flip flop designs since my company was having a hard time > meeting timing. While I'm not an expert, others are, and I've yet to > ever meet a person who know this kind of stuff and doesn't want to > share that knowledge. > > The kinds of things I deal with on perhaps a monthly basis are: > * What are the costs of transmission gate input flip flop versus a > cmos input? > * Can Astro synthesis a 4GHz clock tree? > * How much drive would it take to overpower the drive of another cell > (multiple outputs tied together)? > * What are possible resolve states when you have a race on an async > set/reset flop? > > People still have to solve these problems. They aren't going away. > The younger engineers still face these. > > Now I admit that I do work as an IC designer, but ICs are here to stay. > They may become fewer, but as long as they exist and get more > complicated, plenty of people will be employed in that industry. > > My point to add to this is that many older engineers have difficulty > grasping new ways of operating. Convincing experienced engineers that > synthesis tools actually work can be like pulling teeth sometimes. > Just the other day, some engineers were ranting about some code that a > contractor wrote that was very very behavioral. They were complaining > about how that was killing timing and adding 10s to 100s of levels of > logic. They hadn't tried it out. I ran it through the synthesizer and > it was *faster* than the low level code. > > I don't see knowledge of the really low level stuff going away. In > fact I see it increasing. Things like quantum physics and maxwell's > equations are getting used more and more to make electronics work. > TCAD engineers live in this realm and TCAD is getting used more and > more for things like process definition and modeling. What I see > happening is the rift between the low level process/cell designers and > the logic designers growing as the logic designers get more high level > and the process/cell designers have to get closer to the real physics > of the system. Not all of the knowledge is necessary for all parties. > The fact is that if a good library is present (and nothing super funky > is in the design), a logic designer doesn't need to know electronics. > They simply need to know the how to work with the models that are > employed by the tools. > > -Arlen > I have no problem using synthesis tools, although I have a healthy skepticism of *any* software based tool (which does not mean I won't use it or it's conclusions, merely that all non-trivial software has bugs). As to a logic designer not needing to know the electronics; that's only true if said designer is only designing for where synthesis (or models that will be used) happens to be available. I've yet to see an LSI or larger device where the IO pins could be directly attached to 48V, (and be cheaper than the discrete alternative) yet that is a pretty standard logic design issue in some industries. 
A POR indicator circuit for a 24V vehicle, for instance, could be constructed from standard cells, but at some point we meet the (very nasty) 24V system (which can go up to 80V during load dump and droop regularly during engine cranking). Of course, when designing power supplies (which I also do quite regularly) I expect those sorts of challenges. Incidentally, the 'logic designer' syndrome you mention is precisely what I was railing against in an earlier post; it's shortsighted and foolish. A logic designer that can do logic but not electronics is _not_ an electrical/electronics engineer - they are either a software engineer or a mathematician. Kudos to you for learning the low level parts. As with Peter Alfke, I too am very particular about who we hire. Generally I would rather not hire anyone than hire someone who doesn't have the urge to seek out answers and think for themselves. I am fully aware that such people _will_ make mistakes (it's an occupational hazard) but I would prefer that to hand-holding. I worry that too few people who call themselves electrical/electronic engineers actually know enough about physical layer engineering. Cheers PeteSArticle: 111249
Peter Alfke wrote: > Real progress comes from better integration of popular functions. > That's why we now include "hard-coded" FIFO and ECC controllers in the > BlockRAM, Ethernet and PCIe controllers, multi-gigabit transceivers, > and microprocessors. None of that is precluded, I'm just saying that I haven't heard why it could not be accomplished within a standard framework. Why would the entity (i.e. the interface) for brand X's FIFO with ECC, Ethernet, blah, blah, blah, not use a standard user side interface in addition to the external standards? Besides facilitating movement (which is not the only concern) it promotes ease of use in the first place. > Clock control with DCMs and PLLs, as well as > configurable 75-ps incremental I/O delays are lower-level examples. I agree, those are good examples of some of the easiest things that could have a standardized interface....although I don't think you really agree with my reading of what you wrote ;) > These features increase the value of our FPGAs, but they definitely are > not generic. I said standardized not 'generic'. I was discussing the interface to that nifty wiz bang item and saying that the interface could be standardized, the implementation is free to take as much advantage of the part as it wishes. > > If a user wants to treat our FPGAs in a generic way, so that the design > can painlessly be migrated to our competitor, all these powerful, > cost-saving and performance-enhancing features (from either X or A) > must be avoided. That negates 80% of any progress from generation to > generation. Most users might not want to pay that price. My point was to agree on a standard interface for given functionality not some dumbed down generic vanilla implementation of that function. To take an example, and using your numbers, are you suggesting that the performance of a Xilinx DDR controller implemented using the Wishbone interface would be 80% slower than the functionally identical DDR controller that Xilinx has? If so, why is that? If not then what point were you trying to make? > > And remember, standards are nice and necessary for interfacing between > chips, but they always lag the "cutting edge" by several years. I don't think any of the FPGA vendors target only the 'cutting edge' designs. I'm pretty sure that most of their revenue and profit comes from designs that are not 'cutting edge' so that would give you those 'several years' to get the standardized IP in place. > Have > you ever attended the bickering at a standards meeting?... > Stop bickering so much. The IC guys cooperate and march to the drumbeat of the IC roadmap whether they think it is possible or not at that time (but also recognizing what the technology hurdles to get there are). There is precedent for cooperation in the industry. > Cutting edge FPGAs will become ever less generic. Again, my point was standardization of the entity of the IP, not whether it is 'generic'. > That's a fact of life, and it helps you build better and less costly > systems. But not supported by anything you've said here. Again, my point was for a given function, why can't the interface to that component be standardized? Provide an example to bolster your point (as I've suggested with the earlier comments regarding the Wishbone/Xilinx DDR controller example). KJ KJ