Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Never mind. It's a silly mistake on my part. I had RST tied to the RST pin on the chip. It should be tied to GND. Duh..... "charles" <czheng@ieee.org> wrote in message news:d2fa3f25.0406071851.5a94c50b@posting.google.com...Article: 70201
Nicolas Matringe wrote: > > Peter Sommerfeld wrote: > > Hi folks, > > > > Can anyone recommend an SDRAM controller, free or otherwise, with the > > following features: > > - synthesizable to >100 MHz fmax on Stratix -7 (preferably 133 MHz) > > - allows latent read bursts for maximum throughput > > - bursts efficiently (keeps bank rows open where possible) I know that it takes time to design your own module and debug it. But an SDRAM controller is actually very easy. I have built them before and did not have *any* trouble with it other than a bug in my state machine that would create a hang if the local bus (not the memory bus) was operated in a way I did not expect. Of course, we were only running at a slow rate. But most of the problems you will have with 133 MHz will be external to the chip, not internal. It is not hard to design logic that has a pipeline delay of 7.5 ns if you are at all careful about your design. And memory controllers almost have to be pipelined if you want speed. If you can't find a core you like, I can guarantee a simulated design in a week once the specs are fully nailed down. -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 70202
Peter Sommerfeld wrote: > > Yes I emailed yesterday morning ... waiting on reply. > > -- Pete > > > > > Why not ask the authors? > > > > /RogerL I don't know if you will get a speedy reply. Rudolf Usselmann has contributed a lot of cores and I expect he is getting a lot more questions than he can answer and still work full time. I believe these are cores that his company has paid to develop and they expect to get some consulting income from supporting them, but I doubt that they ask money to answer a simple question, I just don't expect them to be quick about answering. -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 70203
Thomas Stanka wrote: > > Hi, > > "John Retta" <jretta@rtc-inc.com> wrote: > > The previously mentioned article was an interesting read. I have always > > been a strong > > advocate of synchronous design, and this includes the application of resets. > > My rule of thumb is use synchronous resets in all areas, unless > > exceptional > > conditions arise. There are at least two good exceptions : > > [1] Block of logic may have a clock which after power-up disappears, example > > clock derived from a DDS which might be reset operationally. In order to > > restore outputs > > to inactive states in absence of clock, asynchronous reset needs to be used. > > [2] Sequential logic drives tristate enable control - During powerup or > > during > > in-circuit test mode, clock may not be present, and multiple drivers causing > > contention > > can cause device failure. > > I think you forgot logic that drives external circuits (which might be > true for most designs). Normally you start your FPGA during power-up of > the PCB. There exist circuits that will be destroyed if the FPGA > drives its outputs in a state which generates unacceptable currents or > voltages on external parts over a longer period (besides tristate > busses). > > I prefer an asynchronous reset and internal logic that will recover > from every failure mode. In some cases it might be clever to use a > synchronous deassert of the asynchronous reset. I believe that all FPGAs have power on resets that keep the IO in a defined state which is always hiZ AFAIK. As to the synchronous deassert of the async reset, the async reset nearly always has a slow propagation time. So if your clock is at all fast, you will need to have FFs which reset other parts of the circuit and can add a delay once the async reset is released. Being async is just one problem of the async reset. The other part is the slow propagation which will make a sync release an async one. 
So break up the reset on the critical circuits so that each circuit can be reset with a clearly synchronous release. I hope that is clear... :) -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 70204
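The "asynchronous assert, synchronous release" scheme discussed above is commonly built as a small two-flop reset bridge, replicated per clock domain (or per large region, as suggested) so each circuit sees a release edge synchronous to its own clock. This is a generic, hypothetical sketch rather than code from any poster; the entity and signal names are invented:

```vhdl
-- Reset bridge: reset asserts immediately (even with no clock running),
-- but deasserts only on a clock edge, so downstream flip-flops never see
-- the reset release near their active edge.
library ieee;
use ieee.std_logic_1164.all;

entity reset_bridge is
  port (
    clk       : in  std_logic;
    arst_n    : in  std_logic;  -- external asynchronous reset, active low
    rst_n_out : out std_logic   -- async assert, sync release
  );
end entity;

architecture rtl of reset_bridge is
  signal sync : std_logic_vector(1 downto 0) := (others => '0');
begin
  process (clk, arst_n)
  begin
    if arst_n = '0' then
      sync <= (others => '0');     -- assert at once, no clock needed
    elsif rising_edge(clk) then
      sync <= sync(0) & '1';       -- release ripples through two flops
    end if;
  end process;
  rst_n_out <= sync(1);
end architecture;
```

The two-flop chain also gives the release a cycle of metastability settling, which matters when the global async reset net is slow relative to the clock period.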
Jan Gray wrote: > > I wrote "Up to 200 kLUTs", but to be precise, the Xilinx press release > states "With up to 200,000 logic cells...". > > Not the same thing, LUTs and logic cells. Sorry about that. Are you talking about the 12% inflation factor, or are logic cells actually something different than a LUT plus a FF? Anyone know what a logic cell is? -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 70205
Hi all, quick question regarding full vs half duplex. I have a desktop PC with a fiber ethernet card connected to a Memec VP20 board with SFP connectors built in. If I connect up this card to another fiber card my BSD box says it is running at full duplex, but when I connect it up to the Memec board with a Xilinx gigabit MAC core it only runs at half duplex. Can someone explain why this is? Is it in the configuration of the GMAC core or is it the GMAC core itself? thanks MattArticle: 70206
I have been designing using the Virtex-II family in the FG456/FG676 footprints. I imagine that there is a nice left-to-right data flow on-chip which fits nicely onto the CLBs, which increase in address from the left to the right as seen from above. Just like one would expect a schematic of a board of MSI logic to have. I want to make a board with a similar layout (data sources to the left, sinks to the right) but I need to use a bigger chip. I have just noticed that the FF896/FF1152 footprints have the die flipped over. This seems to mean that data on my board will have to flow "right-to-left" inside the FPGA. From larger X addresses to smaller ones. What's the story here? Two questions: 1) Why didn't Xilinx mirror the masks on their larger chips (4000, 6000, 8000) so that flip-chip bonding of the big chip would result in identical CLB placement, as seen from above, as you get with the non-flipped smaller chips? 2) If I take a design which flows left-to-right in a non-flipped package and re-PAR it into a flipped package, will the performance be impacted? Percentage? (For instance, I replace an XC2V2000 in FG676 with an XC2V2000 in FF896, with the same PC board layout of data-in/data-out but a slightly bigger chip area) Thanks for the benefit of your experience: Lawrence NoSpamArticle: 70207
Hi, I wonder who is using the Lancelot daughter board for the Altera Nios dev board? Can we exchange email for discussion? I have many problems with this. I am green to FPGA system design. SanArticle: 70208
"San San" <clsan@cuhk.edu.hkk> wrote in message news:ca6hfb$2d0l$1@justice.itsc.cuhk.edu.hk... > Hi, > > I wonder who is using lancelot daughter board for altera bios dev board? > Can we exchange email for discussion?I have many problem about this. I am > green to FPGA system design. > > San > > There is a mailing list for the Lancelot at http://www.zytor.com/mailman/listinfo/lancelot . It's not very active most of the time, but if you post questions there you might get some useful help. Other than that, I'd suggest posting to this newsgroup before looking at private email discussions - you'll reach more people who could help, and any information or ideas is available to others (both in the newsgroup and in google searches). If the discussion starts getting too detailed and specific, then it can be moved to email. I got the Lancelot reference design up and running fairly easily on the Cyclone nios board, although I wasn't so successful in my attempts to fiddle with the setup (such as changing the refresh rates).Article: 70209
Hi Christophe, Look on the Lattice website. They have released the part that I mentioned before. Look under clock management. Very easy interface, a lot of features, flexible profile management, etc. Best regards, Luc On 2 Jun 2004 04:27:18 -0700, ccoutand@hotmail.com (chris) wrote: >Hi, > >I have several FPGAs in my design and I want the first FPGA to feed >the other FPGAs with its master clock. The first FPGA use a DCM to >reshape an input clock and get its master clock. >I want the three FPGAs to have a phase-aligned clock. >I just don't know how to do it since the master clock of the first >FPGA which is the output clk0 of the DCM has to go through an output >buffer to access a pin to be distributed to the others FPGAs but then >the clock would have a delay compare to clk0. >Is someone can help me with that ? > >Thanks. Christophe.Article: 70210
glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in message news:<Oxmxc.15640$HG.9409@attbi_s53>... > (snip) > > To me, a global asynchronous reset driven from an external > pin, or by the FPGA itself, is fine. The user of the system > is then responsible for any required timing. (I believe most I was puzzled by your saying this. It seems clear that neither the system nor user (ie external to the chip) has any control over the reset skew internal to the device, and that they cannot fix a startup problem caused by this. Perhaps the answer is application-dependent: I can see global external async reset being fine for the class of designs which reset into an idle state, ie they don't do anything until they receive an external trigger. > FPGA will do asynchronous reset on all FF at initialization time.) > > Otherwise, I would agree that asynchronous reset driven by other > parts of the design results in an asynchronous design. > > -- glen My two cents, -rajeev-Article: 70211
rickman wrote: > Jan Gray wrote: > >>I wrote "Up to 200 kLUTs", but to be precise, the Xilinx press release >>states "With up to 200,000 logic cells...". >> >>Not the same thing, LUTs and logic cells. Sorry about that. > > > Are you talking about the 12% inflation factor, or are logic cells > actually something different than a LUT plus a FF? Anyone know what a > logic cell is? Xilinx's definition is: "Logic cell = One 4-input Look Up Table (LUT) + Flip Flop + Carry Logic" If I recall, they are not including the MUXFx's that are after the LUTs. As you correctly surmised, Xilinx feels those are worth an additional 12% (so there are ~12% more logic cells than there are LUTs). Are they right? The high speed (311 MHz, therefore heavily pipelined) design I'm working on right now uses 457 MUXF's plus 6170 LUTs So they're only off by 50% or so. A slower design that isn't nearly as well pipelined uses: 2618 MUXF's plus 22832 LUTs Which is noticeably less than 12%, but closer to their marketing number. MarcArticle: 70212
Matthew E Rosenthal wrote: > Hi all, quick question regarding full vs half duplex. > i have a desktop PC with a fiber ethernet card connected to a memec vp20 > board with sfp connectors built in. If I connect up this card to another > fiber card my BSD box says it is running at full duplex, but when i > connect it up to the memec board with a xilinx gigabit MAC core it only > runs at half duplex. > Can someone explain why this is? > Is it in configuration of the gmac core or is it the gmac core itself? > > thanks > > Matt Howdy Matt, Have you tried looping them on themselves to see if you can isolate the offender? And/or you could plug each into a known good device (like a GbE switch)? Although half-duplex is technically possible with GbE, it is never done in practice. Hopefully the Xilinx core doesn't waste too many LUTs supporting it. MarcArticle: 70213
I solved the problem - we got a very bad batch of circuit boards, each one has a different set of etch problems... Although the main power supplies on the board are OK, they actually make good connectivity to a random number of power pads on the FPGA. Each board has a different set of 'good' connections between the power supply and the FPGA. We're going to scrap the entire batch of *assembled* boards. What a pisser. John P johnp3+nospam@probo.com (John Providenza) wrote in message news:<349ef8f4.0406031111.40c5fad7@posting.google.com>... > So I thought, "how hard can it be to download to a V2P7 part using the slave > serial mode?" > > Well, pretty hard. > > I have a 'master' FPGA that loads a 'slave' FPGA using the slave serial > programming mode. The download process will work over and over, then > suddenly decide not to work. Then it doesn't work for a long > time. Very flakey. > > The master FPGA drives PGM_N, CCLK, and DIN to the slave using > 2.5V CMOS outputs. The signals all look clean, no over/undershoot, > no glitches in the transition region, etc. The data setup/hold is > over 100ns, the data is setup before the rising edge of CCLK and > is held until after the falling edge. By default, the clock is > held low and pulses high for about 70ns when new data is ready for > the slave. > > The master FPGA has very simple logic that interfaces to a PC. The PC can > assert the PGM_N signal and can look at the status of INIT_N and DONE to > monitor programmng status. The PC can also write a 32 bit word into > the master which then serializes the data to the DIN and CCLK pins on > the slave. > > Things to note: > 1) The download is flakey, but since it can succeed multiple times > in a row, I'm pretty sure that the bit order of data being shifted > into the slave FPGA is correct. > 2) I've added logic to grab the output data using the output CCLK clock > so the PC can verify that the data has not been corrupted. It always > reads back correctly. 
> 3) I've turned the 'debug' flag on in bitgen so I can look at the > DOUT pin. When things work, I get lots of pulses out of DOUT, > when download fails, DOUT will either pulse low once or pulse > low multiple times, but stop pulsing before the download completes. > 4) Occasionally, INIT_N is asserted by the FPGA before the download is done. > 5) The data loads to the slave in bursts of 32 bits, ie, the PC sends > a 32 bit word, the master serializes it, then everything waits while > the PC realises it's done and loads the next word. It's about > 150 usec between 'bursts'. > > 3 & 4 lead me to believe the slave FPGA is receiving corrupted data or > a bad clock occasionally, BUT... the signals look fine. > > We have M[2:0] and HSWAP_EN pulled to 2.5V using 10K resistors. PWRDWN_N > is floating as is VBATT. The JTAG signal are also pulled to static levels. > > It almost acts like there's a floating pin - if it's in one state everything > is fine, if it drifts or comes up in the 'wrong' state, the download fails. > > Has anyone run into problems like this? My local FAE is stumped as am I. > > Thanks for any insights! > > John ProvidenzaArticle: 70214
Hi Rick, Yes I've taken the route of developing my own. The opencores one didn't cut it - 2000 LEs and 64 MHz fmax, which is much too big and slow for me (mind you, it lets you change SDRAM timing parameters on the fly via registers, but that's a feature I don't need). Like you say, after poring over some Micron datasheets and drawing up a state machine it didn't seem too difficult. My first compile has fmax 135 MHz without setting a target fmax in Quartus, and 780 LEs (could be less with register packing option turned on). I also added something I think might be useful, which is piling up refresh events so that they are done only at the end of a long burst, instead of interrupting them. Hopefully this will help reach theoretical max bandwidth in the case of long bursts within open rows followed by a bit of idle time (exactly the model of my video app). Now comes the task of testbenching and debugging. I was hoping I didn't have to reinvent the wheel, but I think I'll have exactly what I want when I'm done. -- Pete > I know that it takes time to design your own module and debug it. But > an SDRAM controller is actually very easy. I have built them before and > did not have *any* trouble with it other than a bug in my state machine > that would create a hang if the local bus (not the memory bus) was > operated in a way I did not expect. Of course, we were only running at > a slow rate. But most of the problems you will have with 133 MHz will > be external to the chip, not internal. It is not hard to design logic > that has a pipeline delay of 7.5 ns if you are at all careful about your > design. And memory controllers almost have to be pipelined if you want > speed. > > If you can't find a core you like, I can guarantee a simulated design in > a week once the specs are fully nailed down. > > -- > > Rick "rickman" Collins > > rick.collins@XYarius.com > Ignore the reply address. To email me use the above address with the XY > removed. 
> > Arius - A Signal Processing Solutions Company > Specializing in DSP and FPGA design URL http://www.arius.com > 4 King Ave 301-682-7772 Voice > Frederick, MD 21701-3110 301-682-7666 FAXArticle: 70215
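Pete's scheme of piling up refresh events and retiring them between bursts can be sketched as a small credit counter: a timer posts a refresh obligation every tREFI interval, and the controller retires the accumulated refreshes only when the bus is idle, or unconditionally once a postponement limit is reached. This is a hypothetical sketch, not Pete's actual code; the entity name, the generic values, and the limit of 8 postponed refreshes are assumptions based on typical SDRAM datasheets:

```vhdl
-- Refresh credit counter: accumulate up to MAX_PENDING deferred
-- AUTO REFRESH commands, then issue them in a burst when idle.
library ieee;
use ieee.std_logic_1164.all;

entity refresh_credit is
  generic (
    TREFI_CYCLES : natural := 1040;  -- e.g. ~7.8 us at 133 MHz (assumed)
    MAX_PENDING  : natural := 8      -- datasheet limit on postponed refreshes
  );
  port (
    clk         : in  std_logic;
    rst         : in  std_logic;
    bus_idle    : in  std_logic;     -- controller has no pending accesses
    refresh_ack : in  std_logic;     -- one refresh command was issued
    refresh_req : out std_logic      -- request to issue AUTO REFRESH
  );
end entity;

architecture rtl of refresh_credit is
  signal timer   : natural range 0 to TREFI_CYCLES - 1 := 0;
  signal pending : natural range 0 to MAX_PENDING := 0;
begin
  process (clk)
    variable p : natural range 0 to MAX_PENDING;
  begin
    if rising_edge(clk) then
      if rst = '1' then
        timer <= 0; pending <= 0;
      else
        p := pending;
        if refresh_ack = '1' and p > 0 then
          p := p - 1;                -- a refresh was retired this cycle
        end if;
        if timer = TREFI_CYCLES - 1 then
          timer <= 0;
          if p < MAX_PENDING then
            p := p + 1;              -- another refresh obligation accrued
          end if;
        else
          timer <= timer + 1;
        end if;
        pending <= p;
      end if;
    end if;
  end process;

  -- Refresh when idle with credits outstanding, or force one when the
  -- pile reaches the postponement limit, even mid-burst.
  refresh_req <= '1' when (pending > 0 and bus_idle = '1')
                       or (pending = MAX_PENDING) else '0';
end architecture;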
"Jason Berringer" <look_at_bottom_of@email.com> wrote in message news:n4txc.32812$sS2.988493@news20.bellglobal.com... > My recent task had me interacting with XST and altering the settings to > highest effort to achieve the timing constraint that I had. I eventually had > to use the reentrant route feature to finally make the constraint. I would > imagine that using some floorplanning might have helped me out, but as I > have yet to get into the basics of floorplanning yet I felt to try and just > push the tools more. Since I brought it up, do you use floorplanning when > doing a desing, and if so, where is the best place to start. Is the idea to > get things as close as possible keeping the routing as short as possible, or > just to focus on specific areas that might use faster clocks, and require > short delays? If you need to resort to placing your logic, I'd suggest you first consider RPMs - relationally placed macros - to deal with critical paths. If you have a 25MHz clock that has one flop that always goves you troubles, the paths that drive that flop may be more routing than logic. A good rule of thumb for "decent" routing is 50% routing, 50% logic as reported by the timing analyzer. If you're at 60% logic and you're still having troubles meeting timing, look seriously at ways to redo the logic. If you're at 70% routing, RPMs can place critical components within that path closer together without tying them down to an absolute slice. I'll use explicit LOC location constraints on signals that interact with the I/O cells that have been LOC'ed for my PCB pinout but for logic-to-logic paths I typically use the RLOC relative placement constraints. As far as using the floorplanner tool, I haven't because of early bugs when the tool was first coming out. The user constraints file can include everything you need; you can often put the constraints in your source code if you find that a better place to document your placement constraints. 
In general, you will get better results if related logic is confined to a specific area of the chip using the AREA_GROUP constraint on a module; several wildcard-selected signals can also be kept in a small range to help meet timing on particular paths without resorting to RLOCs. There are many ways to skin the cat. Sometimes it's the place & route that's giving you troubles. Sometimes it's the synthesizer that's throwing in 7 levels of logic onto a critical signal that could have been included in the last level of logic. It's times like that that it's better to recode to coax your synthesizer or seriously look at a new synthesizer that will know better in the first place. Timing is often the most annoying part of high performance design but the results from attention to detail can get you into a lower speed grade device or a higher margin design.Article: 70216
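The LOC, RLOC, and AREA_GROUP constraints discussed above live in the user constraints file (UCF). A hypothetical fragment, with invented net and instance names and an arbitrary slice range, might look like this:

```
# Pin lock for an I/O already fixed by the PCB layout
NET "sdram_clk" LOC = "B12";

# Confine a timing-critical module to one region of the die
INST "u_sdram_ctrl" AREA_GROUP = "AG_SDRAM";
AREA_GROUP "AG_SDRAM" RANGE = SLICE_X0Y0:SLICE_X19Y31;

# Relative placement within an RPM (often written as an RLOC
# attribute in the HDL source instead of the UCF)
INST "u_add_bit0" RLOC = X0Y0;
INST "u_add_bit1" RLOC = X0Y1;
```

Keeping the RLOCs relative rather than absolute lets the place-and-route tool slide the whole macro around while preserving the short internal routes.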
Anybody here attended Altera course titled "Designing with NIOS II and SOPC Builder"? It is instructor-led, 8 hours course, $195 per student. Is it worth going?Article: 70217
Lawrence, To answer 2), the performance won't be impacted by flipping. There are just as many left to right connections as right to left. And vice versa! Have a look in the FPGA editor if you like. This answers 1)! Cheers, Syms. "Lawrence Nospam" <llbutcher@worldnet.att.net> wrote in message news:JIzxc.31862$Gx4.18088@bgtnsc04-news.ops.worldnet.att.net... > > 1) Why didn't Xilinx mirror the masks on their > larger chips (4000, 6000, 8000) so that flip-chip > bonding of the big chip would result in identical > CLB placement, as seen from above, as you get with > the non-flipped smaller chips? > > 2) If I take a design which flows left-to-right in > a non-flipped package and re-PAR it into a flipped > package, will the performance be impacted? Percentage? >Article: 70218
Do you have an external pull-up on TDO of the 2v2000? You need one. Clark Pope wrote: > I am able to program and scan Altera devices on my scan chain but when I go > to load the Virtex2 I get an IR_Capture failure because the LSB on the > Virtex is not being set. I am using a serial prom on that chip so I'm > worried that it is trying to load at power up and messing up the JTAG in the > chip. Anyone ever seen a problem like this? > > >>>INFO:iMPACT:1206 - Instruction Capture = > > '0101010101000000010101000101000101010101010000000110101101' > >>>INFO:iMPACT:1207 - Expected Capture = > > '0101010101XXXX01XXXXXXXXXXXXXX010101010101XXXXXX01XXXXXX01' > > '0101010101XXXX01XXXXXXXXXXXXXX010101010101XXXXXX01XXXXXX01' > >>I generated the third line by reading the .bsdl >>files for what the INSTRUCTION_CAPTURE value should be. So everything >>matches except that the XC2V2000 is not setting the LSB to one. It must be >>kept in reset or something... >> >>attribute INSTRUCTION_CAPTURE of XC2V2000_FG676 : >>entity is >>-- Bit 5 is 1 when DONE is released (part of startup >>sequence) >>-- Bit 4 is 1 if house-cleaning is complete >>-- Bit 3 is ISC_Enabled >>-- Bit 2 is ISC_Done >> "XXXX01"; > > >Article: 70219
Unfortunately, Windows XP was not supported with 4.2i Xilinx software. The fact that some tools work is merely serendipity. Version 5.X and 6.X are the first set of Xilinx tools that fully support WinXP. You can download iMPACT only using WebPACK (choose configuration tools only) which creates an isolated installation that will not taint your 4.2 toolset. Hendra Gunawan wrote: > Hi folks, > I have Xilinx ISE 4.2i installed in Windows XP system. Everything runs fine > except the Impact program (the one you use to download the bitstream). It > said unable to detect the cable or something like that. What can I do to > make Impact run with Windows XP? > I look at Xilinx website and they recommend to download Impact program that > comes with Webpack 6.2i and use it to download the bitstream. But I would > rather not to do that. I don't feel comfortable to have two different ISE in > the same system. > > Thanks in advance! > > Hendra > >Article: 70220
Consider you have 100ppm crystal oscillator, 80ps jitter driving a DCM that is configured to output x1.5 the input frequency. What will happen to the jitter? Will it increase or decrease? And by what factor?Article: 70221
Hello, I get the following warning in the "Simulate Post Translate VHDL Model" step: WARNING:NetListWriters:303 - Unable to preserve the ordering for port bus S on block dft8 using the data S<0><7:0>. I don't know where it comes from or what to do. On the Xilinx pages I can't find any help. The first simulation tool runs without an error. The port declaration is: Port ( C : in std_logic_vector(7 downto 0); dataready : in std_logic; clk : in std_logic; reset: in std_logic; S : out Syndrom); Syndrom is defined in a package as type Syndrom is array (0 to r-1) of std_logic_vector(7 downto 0); If you have any idea what the reason for this message is, please write back. RudiArticle: 70222
Being an engineer myself, I've never much liked the difference between LUTs and logic cells. However, the story goes beyond just MUXFs. The Xilinx LUT delivers a few other unique capabilities ... * 16x1 distributed RAM, either single-ported or with separate read/write and read-only ports - Roughly equivalent to 16 'D' flip-flops, two 16:1 output select MUXs, and 16 4-input decode gates * 16-bit serial-in, serial-out shift register with tap select output - Roughly equivalent to 16 'D' flip-flops, and 16:1 output select MUX Add that into the mix, I believe it comes out well ahead of the 12% fudge factor. It is admittedly a fudge factor because not every design (okay, except for Ray Andraka ;) will use every LUT as distributed RAM or shift registers. --------------------------------- Steven K. Knapp Applications Manager, Xilinx Inc. General Products Division Spartan-3/II/IIE FPGAs http://www.xilinx.com/spartan3 --------------------------------- Spartan-3: Make it Your ASIC "Marc Randolph" <mrand@my-deja.com> wrote in message news:c9mdnbfiOc81YVvdRVn-uA@comcast.com... > rickman wrote: > > Jan Gray wrote: > > > >>I wrote "Up to 200 kLUTs", but to be precise, the Xilinx press release > >>states "With up to 200,000 logic cells...". > >> > >>Not the same thing, LUTs and logic cells. Sorry about that. > > > > > > Are you talking about the 12% inflation factor, or are logic cells > > actually something different than a LUT plus a FF? Anyone know what a > > logic cell is? > > Xilinx's definition is: > > "Logic cell = One 4-input Look Up Table (LUT) + Flip Flop + Carry Logic" > > If I recall, they are not including the MUXFx's that are after the LUTs. > As you correctly surmised, Xilinx feels those are worth an additional > 12% (so there are ~12% more logic cells than there are LUTs). > > Are they right? The high speed (311 MHz, therefore heavily pipelined) > design I'm working on right now uses > > 457 MUXF's plus > 6170 LUTs > > So they're only off by 50% or so. 
A slower design that isn't nearly as > well pipelined uses: > > 2618 MUXF's plus > 22832 LUTs > > Which is noticeably less than 12%, but closer to their marketing number. > > MarcArticle: 70223
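The 16-bit shift register mode Steven describes (the SRL16 primitive) is usually inferred from ordinary HDL: a 16-deep shifter with a dynamic tap select and no reset maps into a single LUT instead of 16 flip-flops plus a 16:1 mux. A minimal sketch, with invented names (the inference style is what XST documents, but this exact code is illustrative, not from the thread):

```vhdl
-- 16-deep shift register with dynamic tap select. With no reset on
-- the register, synthesis can pack the whole thing into one SRL16.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity srl16_style is
  port (
    clk  : in  std_logic;
    ce   : in  std_logic;
    din  : in  std_logic;
    addr : in  unsigned(3 downto 0);  -- tap select, 0 = newest bit
    dout : out std_logic
  );
end entity;

architecture rtl of srl16_style is
  signal sr : std_logic_vector(15 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if ce = '1' then
        sr <= sr(14 downto 0) & din;  -- shift in at the low end
      end if;
    end if;
  end process;
  dout <= sr(to_integer(addr));       -- tap mux absorbed into the LUT
end architecture;
```

Adding a reset to `sr` defeats the mapping: SRL16s have no reset, so the tool would fall back to discrete flip-flops.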
The 100 ppm is irrelevant, it describes a constant frequency error. The factor 1.5 is also irrelevant. The output jitter will equal the input jitter plus about 50 ps coming from the trim-tap uncertainty of the DCM. Peter Alfke David Joseph Bonnici wrote: > > Consider you have 100ppm crystal oscillator, 80ps jitter driving a DCM > that is configured to output x1.5 the input frequency. What will > happen to the jitter? Will it increase or decrease? And by what > factor?Article: 70224
I agree to 100ppm is irrelevant. But according to http://www.xilinx.com/applications/web_ds_v2/jitter_calc.htm the factor does matter. I think you have to use the CLKFX output to multiply by 1.5, and the factor is important. Or am I making a mistake, Peter? Cheers, Syms. "Peter Alfke" <peter@xilinx.com> wrote in message news:40C74FC4.1ECD01C4@xilinx.com... > The 100 ppm is irrelevant, it describes a constant frequency error. > The factor 1.5 is also irrelevant. > The output jitter will equal the input jitter plus about 50 ps coming > from the trim-tap uncertainty of the DCM. > Peter Alfke