What do you mean by "bus like"? Are you in fact connecting independent IO buses together? And if so, is there a specific bus that you are using (for example PCI)? What are the functional requirements? Does it need to recover from errors? Is it a data only interface? Is there some form of flow control? What is the size of the payload (fixed, variable, etc.)? Are the modules in a common system with a single power supply and ground? Or, is the connection between cabinets with different power systems? More details on the problem would enable more specific/helpful responses. TC "Markus Meng" <meng.engineering@bluewin.ch> wrote in message news:aaaee51b.0209230623.700eae42@posting.google.com... > hi all, > > has anybody done this using spartan-ii with - for example - > an LVDS interface? > > What I require is a fast serial - bus like - connection between > 3 or 4 electronic modules. Those modules are close to each other > but in separate cabinets. I would like to use - let's say - RJ11 > connector and cabling, the speed should be ~ 100 Mbps > > markusArticle: 47376
Article: 47376

Tim wrote:
> BTW, was there ever silicon of a 1000 series, before
> the 2064?

No, the 2064 (followed by the 2018, then the 3020...) was the first
Xilinx FPGA.

Peter Alfke
Article: 47377

Austin Lesea wrote:
>
> Marc,
>
> I will reiterate something Peter has said once before: Altera has
> announced that they will have (note use of the future tense) .......
>
> So comparing an existing Spartan IIE product that is selling like
> crazy today (and is the fourth generation Spartan Product) with a
> product that doesn't exist yet at a future technology node is a little
> silly.

Of course, but it is an almost universal marketing reflex. Considering
the technical skills of the customer base, and all the credit given to
'listening to the customers', you would expect semiconductor company
press releases to read less like soap powder adverts....

Digging about on Altera's releases, we can get a time-line (now, they
could have presented a time line, but you never see that in SoapPowder
101).

Cyclone time line:
  Beta SW          : Quartus II version 2.1, July 2002
  Software Support : Now, Quartus II version 2.1 (service pack 1)
  Programming      : Programming file generation for Cyclone devices
                     will be supported in a subsequent software release
  Beta Samples     : ??
  Eng Samples      : E.S. Cyclone EP1C20 and EP1C6 January and February
                     2003, respectively. E.S. EP1C3 and EP1C12 in
                     April 2003.
  Release          : All family members will be in full production in
                     the first half of 2003.
  Process          : 1.5V, 0.13µm, all-layer-copper process from TSMC.

Clearly not in general release right now, but not 100% vaporware as 'a
product that doesn't exist yet at a future technology node' suggests.
0.13u is not 'future technology', and they will have some silicon in
their LABs. (and probably at key customers)

Looking thru Altera's info, the configuration memory looked to have
taken a significant step.

-jg
Article: 47378

Hello, all,

I am working on a design with the Xilinx Spartan IIe, using ISE 4.2 SP3
and Synplicity 7.1 for synthesis. My design is relatively minimally
constrained and meets static timing. In my report I see 8 levels of
logic associated with my system clock.

Today, I removed some unused logic, and reduced the length of some
shift registers in my design, dropping about 32 flip-flops. Now the
design fails to make static timing and the number of levels of logic
associated with my clock has gone up to 12!

What is at work here? Is the synthesis tool shooting me in the foot, or
is it in the Xilinx tool, or my constraints? Advice would be greatly
appreciated.

Clyde
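A back-of-envelope sketch (in Python) of why four extra levels of logic
can push a path over budget: each level adds roughly one LUT delay plus
routing. The delay numbers below are invented placeholders, not
Spartan-IIE speed-grade figures; substitute the values from your own
timing report.

# Rough relation between levels of logic and achievable clock period.
# All delays are ILLUSTRATIVE ASSUMPTIONS, not datasheet values.
T_CLK_TO_Q = 1.0   # ns, flip-flop clock-to-out (assumed)
T_LUT      = 0.8   # ns, one LUT level (assumed)
T_ROUTE    = 1.2   # ns, average routing per level (assumed)
T_SETUP    = 0.9   # ns, flip-flop setup (assumed)

def min_period_ns(levels_of_logic):
    """Rough minimum clock period for a register-to-register path."""
    return T_CLK_TO_Q + levels_of_logic * (T_LUT + T_ROUTE) + T_SETUP

for levels in (8, 12):
    p = min_period_ns(levels)
    print(f"{levels:2d} levels of logic -> ~{p:.1f} ns period "
          f"(~{1000.0 / p:.0f} MHz)")

With these made-up numbers, 8 levels supports roughly a 56 MHz clock and
12 levels only about 39 MHz, which is the shape of the failure Clyde
describes.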
Article: 47379

Jon Elson wrote:
>> Peter Alfke wrote:
>>> After testing devices for millions of device hours at 125 degr C, the
>>> calculated failure rate at Tj = 55 degr C is between 5 and 30 FIT, where
>>> one FIT = one failure per billion device hours.

Jon Elson wrote:
> No, not millions of hours, millions of DEVICE HOURS. So, there are 8760
> hours a year, and if you put 1000 devices in a test chamber for a year,
> you have 8.76 million device hours. That is a pretty good test of
> running the chips at that temperature, internal power dissipation, etc.

Yikes, that "extrapolation" assumes that a chip failure is some sort of
Poisson process. In other words, at any instant in the lifetime of any
part, the probability of a "failure" is the same as at any other
instant. This is obviously not so for bicycles and rocket motors, but is
it true for an ASIC? Is it really true that a 5-year-old chip is as
likely to fail this minute as a 5-day-old chip?

On the other hand, what we are calling a failure, here, is not the sort
of thing we would call a failure in a bicycle or a rocket motor: a
failed bicycle goes crash, a failed rocket motor goes boom, but a failed
FPGA goes on. Perhaps the right word is not "failure", but "error?"

Or is my ignorance showing?

--
Steve Williams                 "The woods are lovely, dark and deep.
steve at icarus.com             But I have promises to keep,
steve at picturel.com           and lines to code before I sleep,
http://www.picturel.com         And lines to code before I sleep."
abuse@xo.com uce@ftc.gov
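To make the arithmetic behind these FIT figures concrete, here is a
small Python sketch using the numbers quoted above (30 FIT, 1000
devices, one year). The exponential survival model is exactly the
constant-failure-rate ("Poisson process") assumption Steve questions,
and the sketch ignores temperature acceleration (the Arrhenius factor
Peter mentions in a later post).

import math

FIT = 30.0             # failures per 1e9 device-hours (upper end of 5-30 FIT)
HOURS_PER_YEAR = 8760

# Qualification arithmetic: 1000 devices in a chamber for one year.
device_hours = 1000 * HOURS_PER_YEAR               # 8.76 million device-hours
expected_failures = FIT * device_hours / 1e9
print(f"Expected failures in {device_hours:,.0f} device-hours: "
      f"{expected_failures:.2f}")

# Constant-failure-rate extrapolation (the assumption under debate):
lam = FIT / 1e9                                    # failures per device-hour
mtbf_years = 1.0 / lam / HOURS_PER_YEAR
ten_year_survival = math.exp(-lam * 10 * HOURS_PER_YEAR)
print(f"MTBF at {FIT:.0f} FIT: ~{mtbf_years:,.0f} years")
print(f"P(one device survives 10 years): {ten_year_survival:.4%}")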
Article: 47380

Is there any source for the discontinued Xilinx XC6200 series FPGAs?

-jim
Article: 47381

td@emu.com (Tony Dean) wrote in message
news:<33aa9b10.0209240950.6573a03b@posting.google.com>...
> Pierre-Olivier Laprise <plapri@tesserae.McRCIM.McGill.EDU> wrote in message
> news:<_Zpj9.624$%C6.203154@charlie.risq.qc.ca>...
>
> > This procedure is in fact documented in many places that speak
> > of the configuration procedure, amongst others in the Spartan-II
> > data sheet under the heading "Initiating Configuration", right
> > next to a flow chart describing the proper configuration flow.
>
> You are absolutely right. I don't know how I could have missed that.
>
> Actually, I do: I was reading the section in the configuration
> procedure under "Boundary Scan Mode" (JTAG) which contains misleading
> phrases such as "configuration being done entirely through the TAP
> (JTAG port)", and "configuration and readback via the TAP are always
> available", and has a step-by-step procedure that does not mention the
> PROGRAM pin.
>
> Nevertheless, I plead guilty to not being observant enough, and I
> hereby acquit Xilinx of the heinous crime of docu-negligence, although
> I submit that much future grief could be averted if in the JTAG
> configuration documentation a simple clause was added, "... and of
> course, one must pulse the PROGRAM line low before reconfiguring the
> device via JTAG."
>
> Humbly,
> -td

Is it not possible to use the SHUTDOWN command with the Spartan-II? I
looked through the Spartan-II datasheet and XAPP176, and it doesn't
really talk about it.

I know this works with Virtex-E. I have been using it for months. The
only problem I had was when I installed new software and it set the
disable reconfiguration and readback flag.

Alan Nishioka
alann@accom.com
Article: 47383

Howdy Austin,

Austin Lesea <austin.lesea@xilinx.com> wrote in message
news:<3D90A1BD.46C477BD@xilinx.com>...
> Marc,
>
> I will reiterate something Peter has said once before: Altera has
> announced that they will have (note use of the future tense) .......
>
> So comparing an existing Spartan IIE product that is selling like crazy
> today (and is the fourth generation Spartan Product) with a product that
> doesn't exist yet at a future technology node is a little silly.
>
> Also stating their projected future price per LUT is better than the
> existing price per LUT is silly. Of course: Moore's Law if they haven't
> totally missed the boat (which they are definitely smart enough not to do).

Howdy Austin,

Perhaps I didn't write my post clearly. I have no clue what Altera will
be charging per LUT, nor do I have any information on the devices to
compare them to Spartan IIE's - and I don't think I came close to
referencing either item in my post. I was simply using Rick's
observations as a spring-board to say that I think this direction
(higher LUT to I/O ratio) is overdue, and gave but two examples.

Truth be told, we ran into this on two XC2V3000 designs and at least
two, if not three, XCV600E designs. An XC2V1000 design is nearly in the
same boat. The point I was trying to make was that if we (a very small
company) have run into this 4 or 5 times in the past 2 years, I suspect
there are a lot more out there just like us.

> Virtex II Pro is the ninth generation FPGA product.

And it is quite an impressive product, as were previous families of
Xilinx parts when they arrived. It is truly a marvel that so much can
be done with such a small, cost effective device. The amazing part is
that even more could be done with them if some BRAM's were traded for
LUT's, thereby upping the LUT to I/O ratio! ;-)

> Maybe we have been doing this for awhile, and maybe we have figured out
> what sells, and what works?

Yes, overall I think you have. Sales of the Virtex-II are proving this
out. But be careful about repeating that too much. There are at least
two major telecom silicon vendors out there that fell into that exact
trap and are now without some key products. The president of one of the
companies (one you do business with, in fact) is irate with his product
management staff over it.

> Obviously, we can't please absolutely everyone.

I agree, and would take it a step further: you almost don't want to
please everyone. If you even think about trying to please everyone, it
means you are making too many compromises, which will force the masses
to pay for things they don't want or need. That creates an opening for
a competitor.

> I appreciate the comments,
> and we always listen to what the customer wants.

That is great to hear - both as an engineer and an investor.

Best regards,

   Marc
Article: 47386

meet PCI 2.2 spec
Article: 47387

Hi Reala,

comp.lsi.cad has the most traffic regarding IC layout, but most of it
seems to be centered around the program 'magic'.

Still at it, eh? ;)

SH7

On Fri, 20 Sep 2002 09:41:08 +0800, "Reala" <-> wrote:

>Hi,
>
>Sorry that I ask the question about IC layout, but I cannot find any
>newsgroup about IC layout.
>Sorry for any inconvenience.
>
>Can anyone tell me some homepage or newsgroup that talks about IC layout?
>Thank you.
>
>Reala
Article: 47388

Blackie, you are exactly right! (Please don't think I'm flaming you,
it's not intended.) The ASICs I make, though, don't fit that category.
To justify the cost, they are 4 to 10 clock domains, fully synchronous
within each domain, but very asynchronous and dissimilar between
domains. Gate counts: well, it's rare to be under 1 million, but they
range from 500K to 10 million gates. Embedded RISC processors, embedded
RAMs and ROMs, multiple large FIFOs and analog blocks, and a variety of
PHYs etc. are typical. Then you have the test circuitry to add even more
timing complexity.

Timing closure is not automatic, and, yes, we earn the money, and the
customers are usually very happy, because 10 million of these were
still cheaper than the 20 to 30 million FPGAs it would have taken,
assuming we could have reached the required speeds in an FPGA.

Yes, we use FPGAs too, mostly for prototyping and proof of concept.
Yes, for the smaller designs they have a very valid place in the
business. But I haven't seen them hit the speeds we need with the
density we need. Nor can they satisfy the gate counts required ... yet.

Blackie Beard wrote:

> I'm not sure about the first statement, since if I had a chip with
> custom DSP, custom DPLL, and say, 50-100K gates, and 1
> clock, and I was going to sell 10 Million of them, and it only used
> 1 clock, why would I not make an ASIC? We shouldn't need to
> break out the calculator to make my point.
>
> Also, the design should take into account the transition between
> multiple domains, and force synchronization between them.
> I suppose if you couldn't use a FIFO bucket for data transfer
> between two domains, then you'd have big timing concerns,
> because depending upon a race condition not occurring would
> be just plain old hokey.
>
> BB
>
> ======================================================
>
>>Anything complicated enough to make an ASIC out of would tend to have
>>multiple clock domains, etc. It would tend to be synchronous within each
>>domain, but if "timing closure is nearly automatic" were true then why
>>do we at Tality get paid piles of dosh to do the layout for so many
>>designs?
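For readers reaching for the calculator anyway: the ASIC-versus-FPGA
decision described above is essentially an NRE-amortization break-even.
A minimal Python sketch follows; every dollar figure and the
FPGAs-per-ASIC ratio are invented placeholders, not numbers from this
post.

# ASIC vs. FPGA break-even: NRE plus cheap units (ASIC) against no NRE
# plus expensive units (FPGA), where one ASIC may absorb several FPGAs.
# All figures below are ASSUMED for illustration only.
ASIC_NRE       = 2_000_000.0   # masks, tools, layout (assumed)
ASIC_UNIT      = 8.0           # per-unit ASIC cost (assumed)
FPGA_UNIT      = 30.0          # per-unit FPGA cost (assumed)
FPGAS_PER_ASIC = 2             # FPGAs replaced by one ASIC (assumed)

def asic_cost(systems):
    return ASIC_NRE + ASIC_UNIT * systems

def fpga_cost(systems):
    return FPGA_UNIT * FPGAS_PER_ASIC * systems

# Break-even volume: NRE / (per-system FPGA cost - per-system ASIC cost)
breakeven = ASIC_NRE / (FPGA_UNIT * FPGAS_PER_ASIC - ASIC_UNIT)
print(f"Break-even at ~{breakeven:,.0f} systems")

for systems in (10_000, 100_000, 10_000_000):
    print(f"{systems:>12,} systems: ASIC ${asic_cost(systems):>14,.0f}"
          f"  FPGA ${fpga_cost(systems):>14,.0f}")

At ten million systems the (assumed) NRE is noise and the per-unit
difference dominates, which is the point bulletdog7 is making.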
Article: 47389

Trcd (IIRC) varies for same vs. different bank. (?? Been a long time
since I read the data sheet.) E.g., RAS(0)->CAS(3) will have one timing,
while RAS(0)->CAS(0) *MAY* have a different timing.

FWIW, you'll get higher bandwidth if you do half of the rotation on the
way in, the other half on the way out. I wound up chopping it into 4
steps to maximise data transfer.

SH7

On Tue, 24 Sep 2002 09:24:26 -0400, "A. Nelson" <anelson@NS.lumenera.com>
wrote:

>
>Why am I writing across the columns? I'm trying to rotate a rather large
>image, 90 degrees. This way, when the data is read out of the RAM, it can
>be read out "normally", and will be rotated.
>
>Because the image is large, I really don't have the space to buffer even a
>couple of lines (I could probably buffer one, but I really don't want to).
>
>Just one more question about SDRAMs:
>Trcd is listed as the minimum time between Active and Read/Write. Now, is
>this between ANY Active and Write, or between Active for a specific row/bank
>and a write to that same row/bank? For example, if I do the 4 consecutive
>Actives (which would be for banks 0 to 3 consecutively), followed by 4
>writes (again, for banks 0 to 3 consecutively), do I have to wait Trcd
>between the LAST Active and the FIRST Write, or Trcd between the FIRST
>Active and FIRST Write?
>
>Thanks for everyone's input,
>A.
>
>> --
>>
>> Rick "rickman" Collins
>>
>> rick.collins@XYarius.com
>> Ignore the reply address. To email me use the above address with the XY
>> removed.
>>
>> Arius - A Signal Processing Solutions Company
>> Specializing in DSP and FPGA design      URL http://www.arius.com
>> 4 King Ave                               301-682-7772 Voice
>> Frederick, MD 21701-3110                 301-682-7666 FAX
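A toy Python sketch of the write pattern being discussed: rotating the
image on the way into SDRAM while spreading consecutive writes across
the four banks, so the ACTIVE-to-WRITE (tRCD) delay for one bank
overlaps work on the others. The bank/row/column address split is an
assumption made up for illustration; a real controller would choose it
to match the actual device geometry.

# Source pixels arrive in raster order but are written at rotated
# addresses.  The writes walk down a destination column, so the low
# bits of the destination row index become the BANK field: four
# consecutive writes then land in four different banks and their
# ACTIVE commands can be issued ahead of the writes they feed.
WIDTH, HEIGHT = 16, 8          # source image (row-major), toy sizes
NUM_BANKS     = 4
COLS_PER_ROW  = 8              # SDRAM columns per row in this toy model

def rotated_address(x, y):
    """Return (bank, row, col) for source pixel (x, y), rotated 90 deg CW."""
    dx = HEIGHT - 1 - y                        # destination column
    dy = x                                     # destination row
    bank   = dy % NUM_BANKS                    # interleave down the column
    linear = (dy // NUM_BANKS) * HEIGHT + dx   # remaining address bits
    return bank, linear // COLS_PER_ROW, linear % COLS_PER_ROW

# One source scanline: consecutive pixels cycle through banks 0..3.
for x in range(WIDTH):
    print(f"pixel ({x},0) -> bank/row/col {rotated_address(x, 0)}")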
Article: 47390

Stephen Williams wrote:

> Yikes, that "extrapolation" assumes that a chip failure is some
> sort of Poisson process.

<snip>

There is a heap of literature on this subject, and I don't think we can
explore it all in this newsgroup. There are bath-tub curves, there are
Arrhenius models, etc.

Of course it is irrelevant whether a one-chip design lasts one million
years or longer, but the various industries, especially the telecom and
military guys as well as consumer folks, have developed pretty
sophisticated models that they are reasonably happy with. Let's use
them.

Peter Alfke, Xilinx Applications
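For reference, the Arrhenius model Peter mentions reduces to a simple
acceleration factor between the stress temperature and the operating
junction temperature. A short Python sketch follows; the 0.7 eV
activation energy is a commonly assumed value, not a figure from these
posts.

import math

# Arrhenius acceleration factor: how burn-in data taken at a hot stress
# temperature is extrapolated to a cooler operating junction temperature.
K_BOLTZMANN_EV = 8.617e-5      # eV/K
EA             = 0.7           # eV, ASSUMED activation energy

def acceleration_factor(t_stress_c, t_use_c, ea=EA):
    t_stress = t_stress_c + 273.15
    t_use    = t_use_c + 273.15
    return math.exp((ea / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(125.0, 55.0)
print(f"Acceleration factor 125C -> 55C: ~{af:.0f}x")
# Each device-hour at 125C counts as roughly 'af' hours at Tj = 55C,
# which is how millions of stressed device-hours turn into a FIT
# estimate at the lower junction temperature.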
Article: 47391

Marc,

I talked with Austin about your previous posting (of this morning). It
is true that the Virtex families emphasize high I/O count, but that is
driven by market demand.

One of the few painless modifications we can make at any time to our
product offerings is device/package matching. Within reason and with
some physical limitations, we can put "any die in any package". The
actual offered combinations really depend on demand from our dear
customers. Unfortunately, many have stayed away from the low-pin-count
offerings that you like so much.

And yes "we listen to our customers". Too much or not enough?

Peter Alfke, Xilinx Applications
Article: 47392

Jim, we at Xilinx cannot help you. Our XC6200 shelves are bare. Most
chips went to universities and to one (or two) small-volume commercial
design. The family died for lack of commercial interest. No sales, no
money, no further development, and then no production.

Sorry not to be able to help you.

Peter Alfke
==============
Jim Lyke wrote:

> Is there any source for the discontinued Xilinx XC6200 series FPGAs? -jim
Article: 47393

In article <15881dde.0209241838.636a1e49@posting.google.com>,
Marc Randolph <mrand@my-deja.com> wrote:

>And it is quite an impressive product, as were previous families of
>Xilinx parts when they arrived. It is truly a marvel that so much
>can be done with such a small, cost effective device. The amazing
>part is that even more could be done with them if some BRAM's were
>traded for LUT's, thereby upping the LUT to I/O ratio! ;-)

Actually, no. If you look at a Xilinx die, you realize that the memory
is a very small fraction. You can really see this on the Virtex E die
photo; the BlockRAMs are only about 2x the width of a CLB column:

http://www.xilinx.com/company/press/products/images/large/virtexedie.jpg

This is one of the reasons why progressive parts are more memory heavy:
it really is comparatively small, but when you want the memory, you
REALLY want the memory, making it high value.

--
Nicholas C. Weaver
nweaver@cs.berkeley.edu
Article: 47394

Marc Randolph wrote:

> The amazing
> part is that even more could be done with them if some BRAM's were
> traded for LUT's, thereby upping the LUT to I/O ratio! ;-)

Please, no. Especially in the larger parts, please more block RAM, not
less.

As parts get larger, and designs get faster, moderate sized internal
buffers have been and will continue to be very useful, at least for
designs I've done. Last big project I did used 100% of the block RAMS
and a bunch of LUTS as RAMS as well.

A key point is that FPGAs are used for lots of different sorts of
designs, and they can never be a close fit for all but a tiny subset of
these designs. There will always be some users asking for more A and
less B and other users asking for more B and less A. Some designs need
one clock tree, why are you wasting resources making more? Others need
5, or 8, or more. Some need more LUTS. Some will need more RAM. I have
no past, current or known future use for multipliers, yet I'd suspect
that a bunch of DSP guys would whine loudly if you took the multipliers
out.

The one thing I'd ask for is some programmable delays for a fraction of
the pins for a 90 degree phase shift for the DQS for DDR RAMS. About 1
in 4 IOBs or 1 in 8 IOBs would be just fine, thank you. Do the Altera
Cyclone parts have something like this?

--
Phil Hays
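For scale, here is what a 90-degree DQS shift works out to in time at a
few example DDR clock rates (the rates are illustrative, not taken from
the post): the data eye is half a clock period wide, so the strobe wants
roughly a quarter-period delay to sample mid-eye.

# DDR transfers data on both clock edges, so each data eye is half a
# clock period and the DQS strobe is delayed by a quarter period.
# Clock frequencies below are EXAMPLE rates only.
for f_mhz in (100.0, 133.0, 166.0):
    period_ns = 1000.0 / f_mhz
    data_eye  = period_ns / 2.0        # DDR: two transfers per clock
    shift_90  = period_ns / 4.0        # 90 degrees of the clock period
    print(f"{f_mhz:5.0f} MHz clock: bit time {data_eye:.2f} ns, "
          f"90-degree DQS delay {shift_90:.2f} ns")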
Article: 47395

akineko@pacbell.net (Aki Niimura) writes:

> Hi,
>
> We tried to install the ISE5.1i on our Sun/Solaris 7 server which has
> 4GB memory. We are using ISE4.2i (SP3).
>
> We got the following error message.
>
> % ./setup
> ld.so.1: /foo/xilsetup: fatal: librt.so.1: version `SUNW_1.2' not
> found (required by file /foo/xilsetup)
> Killed

I haven't received 5.1 yet, but it appears that you need to set
LD_LIBRARY_PATH to point to the directory where librt.so.1 resides.
LD_LIBRARY_PATH is usually set by sourcing the settings file. But you
can do

  find /foo -name librt.so.1

If the file is located in /foo/libs you then do:

  LD_LIBRARY_PATH=/foo/libs /foo/xilsetup

I'm assuming a sh compatible shell here. If there are more missing
libraries you have to repeat the procedure and add each directory to
the LD_LIBRARY_PATH separated by :.

Petter
--
________________________________________________________________________
Petter Gustad   8'h2B | ~8'h2B   http://www.gustad.com/petter
"Bret Wade" <bret.wade@xilinx.com> wrote in message news:3D8FB6EC.A33F615F@xilinx.com... > This RPM pack error has been investigated and found to be a bug in the mappers > pack code. A fix is scheduled for 5.1i service Pack 3 which will become > available in December, 2002. Thank you! BTW I have just received 5.1i. (Compared to 4.2i, on one P&R, effort 5, on a non-trivial floorplanned datapath, the critical path and path delay was identical, down to the ps, even though two net delays changed!) In my review of the 5.1i materials, I have just come across the RPM_GRID notion, and this app note, written by Mr. Wade: "Using an RPM Grid Macro to Control Block RAM-to-FF Timing" http://www.xilinx.com/xapp/xapp416.pdf Very interesting. Has anyone here in comp.arch.fpga land been using RPM_GRID, and can you share your experiences with us? See also the recent RPM techXclusive, http://www.xilinx.com/support/techxclusives/RPMs-techX30.htm. Thanks, Jan Gray, Gray Research LLCArticle: 47397
Article: 47397

Marc Randolph wrote:
>
> rickman <spamgoeshere4@yahoo.com> wrote in message
> news:<3D8F2B52.9DA16C60@yahoo.com>...
>
> > After taking a quick look at the Cyclone family, I can see they are
> > taking a slightly different approach than Xilinx.
> >
> > The Xilinx Spartan II series is based on the Virtex family just as the
> > Spartan was based on the XC4000. In contrast, it looks like the Cyclone
> > family is not directly based on any existing Altera family.
> >
> > Comparing the Spartan II to the Cyclone shows that the Cyclone has a
> > higher LUT to IO ratio. The Cyclone looks a little more like a
> > potential future Spartan III based on the Virtex II family which also
> > has a high LUT to IO ratio.
> >
> > SpartanII  2S50E  2S100E  2S150E  2S200E  2S300E
> > LUTs        1536    2400    3456    4700    6144
> > IOs          182     202     263     289     329
> > ratio          8      12      13      16      19
> >
> > Cyclone    EP1C3  EP1C6  EP1C12  EP1C20
> > LUTs        2910   5980   12060   20060
> > IOs          104    185     249     301
> > ratio         28     32      48      67
> >
> > As it turns out this makes the Cyclone parts very expensive in high IO
> > count applications. Further, Altera seems to not have small chip scale
> > packages for the low end of the family. Looks like they really tried to
> > go for a low price, limited options product line, even more so than the
> > Spartan II. The high LUT count may prove a benefit for some
> > applications though.
>
> From what I've seen, the last 3 or so generations of Altera devices
> appear to have higher LUT/IO ratios than the comparable Xilinx family.
> In general, at least in the telecom industry where I am, I think this
> is the right direction to be moving and Xilinx appears to have finally
> figured this out, from the looks of the proposed Spartan III.
> Undoubtedly, the problem is identifying exactly what features and what
> packages to offer for given LUT ranges.
>
> We are constantly bumping up against the largest device in a cost
> effective package. IE, we definitely would move to using a larger
> Virtex-II (XC2V4000) if it were available in anything smaller than an
> expensive 1152 pin flip chip package [or the monster 40x40 1.27 mm BGA
> package]. I realize Xilinx probably has their reasons for the
> offerings they have (most likely thermal?), but if they could overcome
> that, it'd be easy money for them, cause I'll bet there are others out
> there in the same boat as us.
>
> Same went for the Virtex-E... the 600E was the largest you could get
> in a reasonable cost/size package. The XCV812EM was available, but
> not competitively priced. We would have used a large device if Xilinx
> had offered it in a FG676. Instead, in both the Virtex-II and
> Virtex-E, we get to play roulette with MAP and PAR, constantly
> bumping up against random timing violations that differ from run to
> run.

Package alone is not likely to be a dominant cost factor: far more
relevant will be die area, yield, testing times, FAB run volumes (mask
amortisation), plus the M Squared fudge factor (M^2 = Marketing Margin).
(See also the other thread on the higher-end CPLD price kick.)

That said, it makes sound sense to offer a broad range of die in a
common package - designs NEVER get smaller as they mature :)

A good example that proves this can be done is the Actel ProASIC - they
offer ALL die, from 75K to 1000K, in a PQFP208 package (7 steps). If you
really need IO, they also have a FBGA1152 on the biggest device.

-jg
Article: 47398

What are your speed requirements that FPGAs won't hit? Many ASIC apps
are achievable with a proper FPGA design (note that such a design is
usually much more heavily pipelined and is floorplanned, a direct port
of your ASIC code is going to be slow in an FPGA). Your density numbers
are certainly high, and I think well above the industry average.

bulletdog7 wrote:

> Blackie, you are exactly right! (Please don't think I'm flaming you,
> its not intended) The ASICs I make though don't fit that category. To
> justify the cost, they are 4 to 10 clock domains, fully synchronous
> within each domain, but very asynchronous, dissimilar. Gate counts;
> well its rare to be under 1 million but they range from 500K to 10
> million gates. Embedded risk processors, embedded rams and ROMs
> multiple large FIFOs and analog blocks; a variety of PHYs etc. are
> typical. Then you have the test circuitry to add even more timing
> complexity. Timing closure is not automatic, and, yes we earn the
> money, and the customers are usually very happy. because 10 million of
> these were still cheaper than 20 to 30 million FPGA's. that it would
> have taken for FPGA's assuming we could have reached the required
> speeds in an FPGA.
>
> Yes we use FPGA's too, mostly for prototyping and proof of concept.
> Yes for the smaller designs they have a very valid place in the
> business. But I haven't seen them hit the speeds we need with the
> density we need. Nor can they satisfy the gate counts required ... yet.
>
> Blackie Beard wrote:
>
> > I'm not sure about the first statement, since if I had a chip with
> > custom DSP, custom DPLL, and say, 50-100K gates, and 1
> > clock, and I was going to sell 10 Million of them, and it only used
> > 1 clock, why would I not make an ASIC? We shouldn't need to
> > break out the calculator to make my point.
> >
> > Also, the design should take into place the transition between
> > multiple domains, and force synchronization between them.
> > I suppose if you couldn't use FIFO bucket for data transfer
> > between two domains, then you'd have big timing concerns,
> > because depending upon a race condition not occuring would
> > be just plain old hokey.
> >
> > BB
> >
> > ======================================================
> >
> >>Anything complicated enough to make an ASIC out of would tend to have
> >>multiple clock domains, etc. It would tend to be synchronous within each
> >>domain, but if "timing closure is nearly automatic" were true then why
> >>do we at Tality get paid piles of dosh to do the layout for so many
> >>designs?

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
                                        -Benjamin Franklin, 1759
"Each Cyclone device is equipped to interface with DDR SDRAM and FCRAM devices using optimized I/O pins as seen in Figure 1. Each I/O bank features two sets of interface signal pins and each set contains a single data strobe (DQS) pin and eight associated data (DQ) pins. These pins are designed for high-speed data transfer with an external memory device using the SSTL-2 Class II I/O standard. Up to 48 DQ pins are available per device with 8 corresponding DQS pins, supporting a single dual-inline memory module (DIMM) with 32-bit data and error correction." http://www.altera.com/products/devices/cyclone/features/cyc-ext_mem_int.html Should answer the question below. > The one thing I'd ask for is some programmable delays for a fraction of > the pins for a 90 degree phase shift for the DQS for DDR RAMS. About 1 > in 4 IOBs or 1 in 8 IOBs would be just fine, thank you. Do the Altera > Cyclone parts have something like this? - DS "Phil Hays" <SpamPostmaster@attbi.com> wrote in message news:3D913E3F.C4E82D64@attbi.com... > Marc Randolph wrote: > > > The amazing > > part is that even more could be done with them if some BRAM's were > > traded for LUT's, thereby upping the LUT to I/O ratio! ;-) > > Please, no. Especially in the larger parts, please more block RAM, not > less. > > As parts get larger, and designs get faster, moderate sized internal > buffers have been and will continue to be very useful, at least for > designs I've done. Last big project I did used 100% of the block RAMS > and a bunch of LUTS as RAMS as well. > > A key point is that FPGAs are used for lots of different sorts of > designs, and they can never be a close fit for all but a tiny subset of > these designs. There will always be some users asking for more A and > less B and other users asking for more B and less A. Some designs need > one clock tree, why are you wasting resources making more? Others need > 5, or 8, or more. Some need more LUTS. Some will need more RAM. I > have no past, current or known future use for multipliers, yet I'd > suspect that a bunch of DSP guys would whine loudly if you took the > multipliers out. > > The one thing I'd ask for is some programmable delays for a fraction of > the pins for a 90 degree phase shift for the DQS for DDR RAMS. About 1 > in 4 IOBs or 1 in 8 IOBs would be just fine, thank you. Do the Altera > Cyclone parts have something like this? > > > -- > Phil Hays