Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Hi all, When I simulate the netlist with a gated clock, I found the output is very different from what I see at RTL level. So I added a tfile in NCSim to disable the delays and timing checks in global scope (because the design has no memory like RAM/FIFO). The netlist waveform looks better, but there are still some small differences between the RTL and netlist waveforms (e.g. some signals are one clock early and some are one clock late). I guess the gated clock does not behave like the original clock and introduces races. But how should I understand gated clock simulation behavior? Any comments/references will be appreciated! Thanks! Best regards, DavyArticle: 111626
Hi Davy, When you say netlist simulation do you mean a timing simulation? And when you say RTL level simulation do you mean a behavioral simulation? If my assumptions are correct, think about the following: If you are using gated clocks, the gate has no delay in behavioral simulation, so your circuit works as expected. But in timing simulation the gate and the associated routing create a delay on the clock signal of the connected registers. The effects depend on the desired clock speed, and may be significant as you already observed. To overcome this you should consider using Clock Enable inputs rather than gating the critical clock net. And, yes, you guessed right. A gated clock doesn't behave like the original clock because it's a totally different signal. You can compare it to trains. One on rails (clock net), the other not (normal routing resources). Guess which one misses its schedule at the next station. :-) Have a nice simulation Eilert Davy schrieb: > Hi all, > > When I simulate netlist with gated clock, I found the output is very > different with what I see in RTL level. > > So I add tfile in NCSim to forbidden the delay and timing check in > global scope (Because the design have no memory like RAM/FIFO). > > The netlist waveform seems to be better, but there are also some > trivial differences between RTL and netlist waveforms (e.g. some signal > have one clock advance and some signal have one clock delay). I guess > gated clock does not behavior like original clock and introduce race. > > But how to understand gated clock simulation behavior? Any > comments/reference will be appreciated! > Thanks! > > Best regards, > Davy >Article: 111627
Hi Avishay, One advantage is vendor independence. You can check the performance of your design on platforms of different vendors. Also you don't need to install different software and waste time on learning how to use it efficiently. Instead you can focus on your tool and use it with maximum efficiency. Right now you are designing with Altera FPGAs, but there may come a time when you want or need to change to another vendor. There may be more advantages and even disadvantages (e.g. support for the latest chips will always come with a little delay), but the above is one major point of vendor-independent software. Have a nice synthesis Eilert avishay schrieb: > Hello all, > I'm designing with Altera FPGA with their Quartus software. My company > also have license for Mentor Graphics' Precision synthesis tool. From a > > very brief check, it seems that Quartus' built-in synthesizer gives > comparable results to the Precision. I wonder if there is any advantage > > in using an external synthesis tool. What does it give me more than > Quartus has to offer? > > Thanks, > Avishay >Article: 111628
Marc Battyani schrieb: > "eric" <erixx@gmx.net> wrote > >> There is another simple possibility to speed up calculations. >> Do you know that the GPU on a modern graphics card can do matrix >> calculations a lot times faster than a CPU? There are several >> SDKs and you wont need special Hardware for that. >> >> Xilinx and Altera stuff takes a lot of time and a high risk >> because if you really have a problem to solve the engineers from there >> say: "This is not supported"... You get only help if you a buying in large >> quantaties. >> >> And in scientific computing you never have a large quantatie. >> >> >> So use alternativly a GPU for fast calculations like FFT and Sorting. >> >> My favorite SDK is BrookGPU >> (http://graphics.stanford.edu/projects/brookgpu/). >> Easy to use and a lot of out oft the box runnig examples. It works with >> every modern 3d Accellertor because it uses OpenGL and DirectX. >> >> You will implement the stuff in a week or two. In VHDL you'll need half a >> year for the simulation and timig stuff for the same result. >> >> Think about it! > > GPU have a lot of limitations like single precision FP, memory > bandwitdh/latency and organization, computation model, etc. > > Well it's the old ASIC(fixed) vs FPGA topic ;-) > > Marc > > But you get fast results in development and processing, and if this is your goal, and not the "I can do everything" with my FPGA, it's the better choice. In most cases 24-bit FP is enough, and you don't have to pay anything for the processing hardware. You can also split the processing task between CPU and GPU and get faster results. FPGA is cool if you design stand-alone devices, but for scientific calculation there are better and easier-to-implement solutions out there. It depends on what you want to do. And of course, if you have enough time and money, use the "I can do everything" solution. EricArticle: 111629
al99999 schrieb: > Hi everyone, > > I am trying to add a USB 480Mbps interface to my fpga design which will > stream data in one direction only from the FPGA to PC. > > The avnet evaluation board I have has the Cypress 68013 chip. > > Can anybody point me in the right direction of where to start (or which > is the most suitable application note on the cypress website), and if > there is any sample cores to embed on the fpga. > > Thanks very much, > > Al > Have you thought about the Xilinx ML403 with Gigabit Ethernet? There is a solution called GSRD with Jumbo frames. You can get about 500 Mbit/s via socket communication with a PC. And I made Monta Vista Linux run on the PowerPC in a Virtex-4. This could also be a solution if you're not fixed on USB. EricArticle: 111630
Hi Andreas, Thanks for the reply. I thought about the problem yesterday. Originally the objective was to do on-board polyphase filtering to reduce the bandwidth and then send the data to be processed in real time on the PC. The card now is just used to capture data and supposedly STREAM it to the PCIe bus. The PCIe controller would then use its direct memory access controller to store the data in the PC RAM. Linux/Windows would run 'real time' processing on this data. I have re-read the data sheet of the ADC and have found that at best the Effective Number of Bits is 7.2. Therefore I can do what you suggested and just reduce the bandwidth in this manner to 7 bits. The problem was the protocol overhead, but with buffering (I'm using the FPGA on-board block RAMs and an external QDR SRAM) and only using 7 ENOB I think I will be able to comfortably stream the data to the PCIe bus. I have also spoken to my Prof. and he says that what we can also do is reduce the sampling rate and thus make the input data rate approximately 800 MB/s, which would be perfect. Can you foresee any other problems? Thanks a million Jason Andreas Ehliar wrote: > On 2006-11-06, slkjas@gmail.com <slkjas@gmail.com> wrote: > > Hi > > > > Im trying to design a high speed data capture card. Im using a Lattice > > ECP2M-50 FPGA with the one-board SERDES units (MGBT in Xilinx > > dtasheets). Im using a MSPS Nation ADC. This dual ADC has a output of > > 1Gb/s and thus the combined x4 lane PCIe will match this rate. HOWEVER, > > if there is latency on the PCIe bus more than 100us then my RAM inside > > the FPGA will overflow. I need to know bus latency between TLP's > > because i need to know if I require external RAM or if the design in > > possible! > > > First of all, this post assumes you mean Gigabyte/s (GB/s) since you are > talking about matching the rate with a 4 lane PCIe configuration. > > > I see at least one real problem here: > > PCIe has a link speed of 2.5 GHz per lane. 
After 8B/10B decoding you > will have 250 MB/s. With 4 lanes you will get 1000 MB/s. (Exactly what > you need.) > > Unfortunately you will have protocol overhead. This means that you > will not even theoretically be able to push data in the speed required > by your application (Assuming that you truly need 1 GB/s and not say > for example 950 MB/s.) > > Perhaps you can design the card so that in case of emergency (your > PCIe host cannot accept your packets fast enough) you will reduce > the precision of your samples from 8 bits (which I assume you use) > to 4 bits and somehow tell your application that the precision is > not enough. > > > But quite a lot depends on your application. For example: > 1. Do you just want to store as much data as you can fit into your main > memory? > 2. Do you intend to do some sort of real time processing on the data > in your host? > 3. As mentioned earlier, do you truly need exactly 1GB/s? In that case, > you will need more than 4 lanes... > > If 1, perhaps you can use 7 bits / sample instead to reduce the required > bandwidth. If 2, perhaps you can do some processing on the data in the FPGA > to reduce the bandwidth. > > > As for latency, my guess is that the latency should be far less than 100us. > But personally I would not feel very safe unless I had some guarantee from > the host system that a certain bandwidth to main memory was reserved for > my PCIe card. (At least if the required bandwidth is very close to the > theoretical maximum when using maximum sized packets.) > > I guess your google skills are as good as mine, but I could point out > http://nowlab.cse.ohio-state.edu/publications/journal-papers/2005/liuj-ieeemicro05.pdf > where the latency and bandwidth of PCI express based Infiniband HCA:s are > tested. The latency of a small message is around 3.8 us in this case so > from that point of view, 100us should be more than enough. > > /AndreasArticle: 111631
Hi, There is no way to tell MicroBlaze not to use RLOC. You can however tell the ISE tools to ignore RLOCs. The map needs the parameter -ir Göran <me_2003@walla.co.il> wrote in message news:1162829316.487307.248680@m73g2000cwd.googlegroups.com... Hi Goran, Is there any way to instruct the platgen not to use RLOC for the microblaze? It is just that now (after doing what you suggested above) my mapping went well but the PAR fails - It says that a certain core (xilinx reed-solomon decoder) cannot be placed. I figured out that maybe the RLOCs of the microblaze causes this problem (my chip utilization is under 50%). Thanks in advance, Mordehay. error snippet from par log file : -------------------------------------------------------------------------------------------------------------------------- Starting Placer Phase 1.1 ERROR:Place:346 - The components related to The RPM "CORE/RFEC_i/rs_dec_i/rs_docoder_i/rs_dec/dec/sy/nig1/ffo1/r1" can not be placed in the required relative placement form The following components are part of this structure: SLICEM CORE/RFEC_i/rs_dec_i/rs_docoder_i/N207 SLICEL CORE/RFEC_i/rs_dec_i/rs_docoder_i/N19412 SLICEL CORE/RFEC_i/rs_dec_i/rs_docoder_i/N19411 SLICEL CORE/RFEC_i/rs_dec_i/rs_docoder_i/N19409 SLICEM CORE/RFEC_i/rs_dec_i/rs_docoder_i/N208 SLICEM CORE/RFEC_i/rs_dec_i/rs_docoder_i/N204 SLICEM CORE/RFEC_i/rs_dec_i/rs_docoder_i/N202 The reason for this issue is the following: This logic may be too large or of too irregular shape to fit on the device. --------------------------------------------------------------------------------------------------------------------------Article: 111632
Hi, you can get FFT/IFFT VHDL code from OpenCores.org. It works fine for me.Article: 111633
I've just read the user's guide of the cable I use. There is an RC network. Thank you all. Cheng Georg Acher schrieb: > "uvbaz" <uvbaz@stud.uni-karlsruhe.de> writes: > >Hi Jonathan, > > > >No, it is not insulting at all. > > > >Auctually there is an Agilent Softtouch Probes on the FPGA Board(for > >Logic Analyser), i connected the oscilloscope to the end auf the > >Agilent Cable, and got the stranger picture. > > The LA probes have an internal RC-network (about 90K with 8.2p parallel) and > require the correct input stage. So they are not usable for normal scope inputs. > > I've got biten the other way. I tried to feed the signals without the probe > heads in the LA and wondered about the strange bit patterns ;-) > > -- > Georg Acher, acher@in.tum.de > http://www.lrr.in.tum.de/~acher > "Oh no, not again !" The bowl of petuniasArticle: 111634
On 2006-11-06, Matthew Hicks <mdhicks2@uiuc.edu> wrote: > I know there exists a path for the XUP board to communicate with a PC via > the USB port because Chipscope does it and I have seen projects using their > own boards but with USB as a configuration (JTAG) and communication option. > The user guide doesn't give any insight on the issue and I tried searching > for info on the web to no avail. Any tips or suggestions? You want to google for BSCAN and Spartan3 (or virtex2, you don't state what XUP board you are using). Another link with information about this subject is http://www.s3group.com/system_ic/gnat/ /AndreasArticle: 111635
John_H <newsgroup@johnhandwork.com> writes: > Martin Thompson wrote: > > Maybe I'll regret jumping in here, but here's my take :-) > > We designers want standard interfaces to FPGA bits and bobs. > > <snip> > > It's nice to hear another opinion, but.... I have a software guy that > wants us FPGA designers to standardize on 16550 UARTs for our embedded > designs. WTF?! This device is from a quarter century ago and made to > work with serial links that are quite a bit worse than the embedded > applications I have today. > > WHY is a standard interface desired when much of what "was" there can > become completely superfluous? > Well, to make life easier for *some* people. But read on... > Personally - and this is opinion as it is with the other posts - I > want to get the best performance/cost/size balance I can strike. Why > should I burden my design with bloated code to pacify others who want > to blow my performance, increase my cost, or bloat the size? > I agree - you shouldn't. > If you want a standard interface, make one. PLEASE don't force me to > use silicon with hard-coded features that are much less than what they > could be all for the sake of conformance. > I (and I think KJ) don't want the silicon defined by the interface. If you need all the hairy features of the V-5 FIFO, then it's there for you to use. But for those of us who just need a simple FIFO with a write and a read and some flags, I shouldn't need to instance a different block for each bit of silicon I target. The interface *to my VHDL (or verilog)* is what is constant. The silicon can do what it wants. And if in future it ends up unable to meet my simple interface (which I doubt for a memory or FIFO type thing) then I'll accept I need to change things. All this applies in spades to RAM blocks: most of the time, all I want is an address bus (or two), a write enable and a data bus (or two). 
The Xilinx way forces me to instantiate specific sizes of blocks, which change from generation to generation, when all I want to do is say I need a 2Kx8 RAM. > The only time I would care to see a standard interface pursued is if > there is ZERO impact to my engineering tradeoffs. > And that is what I would want to see also. Easy for those who can gain from standardisation. "Power" available for those who need it. Cheers, Martin -- martin.j.thompson@trw.com TRW Conekt - Consultancy in Engineering, Knowledge and Technology http://www.conekt.net/electronics.htmlArticle: 111636
> Jonathan Bromley wrote: >> What is the signal? Thank you for answering my question. The signal will be from a lidar system (a laser-based radar system) and I need the Doppler information from it. It is a project for weather information. So I want to do a 1024-point FFT and filtering.Article: 111637
scott moore wrote: > To be a > viable replacement for general computing, FPGAs must have a high level > language facillity associated with them, and existing efforts in that > direction (C to hardware compilers) result in near parity with modern > high speed processors. The true speed advantage does not occur until > the design is transferred to full ASIC. That's only true for some C=>FPGA implementations, in particular those that are in some form cycle accurate, like Celoxica's product Handel-C. Others, which more or less synthesize like similar Verilog/VHDL, will see as much parallelism as the native algorithm has. After learning the code generation, writing FPGA-specific code will give significantly faster execution. In particular, unrolling loops heavily and generating fine pipelining is relatively easy for some algorithms, which can see two to three orders of magnitude per mid-sized FPGA, and can easily be cascaded in a dense mesh to build special-purpose petaflop supercomputers in the $3-50M range. The specific advantage of FPGAs is achieved by designing around LOTS of LUT-based memories, which completely avoids the memory serialization that fundamentally lowers performance on most traditional architectures. In addition, the coupling between processing elements is direct wired connections, rather than a bit/word serial bus. There are clearly algorithms which are serial, with little to no parallelism, that FPGAs simply cannot help, and are in fact the same speed or slower than on a traditional CPU.Article: 111638
Hi Bob, I've matched the trace lengths to within 100 mils, is that OK? BTW, I use a Spartan-3 FPGA. Does having the DCI option in Spartan-3 eliminate the external termination resistor at the receiving end? One more thing: the other FPGA I'm using is an Altera Cyclone II (for this chip-to-chip connection), and the Cyclone FPGA needs at least 3 external resistors. My question is, can Spartan-3's LVDS_DCI compensate for the external resistor network at the Cyclone II end? I mean, if DCI is enabled in the Spartan-3, will there be no need for those resistors when the Cyclone II is driving the LVDS line? Yy Bob wrote: > "yy" <yy7d6@yahoo.com.ph> wrote in message > news:1162860919.587459.46830@m73g2000cwd.googlegroups.com... > > Hi i'm currently working on a high-speed chip-to-chip serial interface > > FPGA interface, i would like to know some suggestions regarding FPGA > > differential signalling; especially the trace matching of pair of LVDS > > signal, and the whole Channel (set of LVDS signals; Tx_Frame, Tx_Clk, > > Tx_Data) etc. > > My application is for 622Mbps signalling rate. > > Anyone has an experience on this? > > > > It's important to keep all signals' p-to-n length matched closely. This will > insure that there is a clean cross between the p and n inputs of the input > comparator during one-to-zero and zero-to-one transitions. You don't want a > p input to change from zero to one while the n input is still stuck at a > one. > > For signal-to-signal length matching (in a source synchronous bus), you must > consider the outputs' clock-to-out delay matching and the inputs' setup and > hold time. There is no way to determine the length matching requirements > without knowledge of the output and input characteristics. > > I would recommend using a FPGA family that has internal 100ohm differential > termination (within the input block). For Xilinx, this is V2Pro and above > (watch the VCCO requirements carefully). 
This will make layout easier and > give you the best setup and hold margin because at 622Mbps you're gonna need > all you can get. > > BobArticle: 111639
Nico Coesel wrote: > Still, the amount of processing power a modern PC processor can > deliver is enormous. It is problably more cost effective to optimize > an algorithm to run parallel on 10 PC's than to develop a specific > FPGA solution. If space is a constraint, the answer is in using blade > servers. Take a certain class of NP backtracking problems, like nqueens, and what you get with 10 clustered CPUs is a 9.9x performance increase if you're lucky. A medium-sized FPGA can hold a few hundred nqueens engines running at 200MHz, for an effective performance of about 100X. Other algorithms, like RC5 cracking engines, either fully unrolled or as a massive bit/digit serial version, can match or exceed the performance of hundreds of clustered processors using mid to large FPGAs. Pencil-and-paper estimates suggest a few dozen LX200 parts are equivalent to the entire DNet effort, at a fraction of the power and cost of the thousands of DNet machines. That is assuming you can actually get enough power into the chip, and cool it at max clock rate for a fully utilized part. Performance of some of the high-end Altera parts, and the Virtex-5 parts, is expected to be significantly better. Bit/digit serial floating point, using multiply-accumulate (MAC) architectures, easily implements dense 2D and 3D physics models using Gauss-Seidel relaxation type solvers. These MACs easily interconnect with little routing fuss, can easily be clocked at near max clock rate, and produce an iteration every word-length number of clocks, or so. A dense mesh of FPGAs can handle problems much bigger than even a fast, large clustered processor can solve, with the communication cost between cell groups simply FPGA pin speeds. Pin I/O can be a fraction of internal clock rate, and still give good convergence. 
In short, a water-cooled cubic foot of large FPGAs has higher performance for these applications than the fastest supercomputers that exist today, at a fraction of the power and cost, and you don't need a basketball-court-sized machine room or the huge amount of air flow for cooling (which wastes energy moving it all). The cubic foot of FPGAs does, however, require about 200-500KW of power, which is an interesting power/cooling project on its own. The project is most easily done by running the FPGA stack directly off a 480V 3-phase power grid, with the mesh planes stacked to the rectified DC voltage, and the planes AC-coupled (LVPECL) to avoid level conversion. If space is a constraint, the above solution will run on a desktop for a large physics problem, while an equivalent PC blade cluster solution would require 10,000 sq ft of 42U racks (and a dedicated power plant)Article: 111640
Hi Goran, Thanks again for your response. When using the -ir option I managed to get my PAR right, but now I'm worried about the results of not using the RLOC constraints. Is there a danger that my design won't work properly now? Thanks again, Mordehay.Article: 111641
No, The only thing that might happen is that you will not reach the timing constraints. Göran <me_2003@walla.co.il> wrote in message news:1162900739.332371.6350@m73g2000cwd.googlegroups.com... Hi Goran, Thanks again for your response. when using the -ir option I managed to get my PAR right, but now I'm worried about the results of not using the RLOC constraints. Is there a danger that my design wont work properly now ? Thanks again, Mordehay.Article: 111642
Hi, I am using ML403 board with Virtex-4. I need to generate and download a PROM file into the PROM and then use it after I switch it OFF and then again ON. Can anyone please help me out in doing so?? Thanks and regards, SandipArticle: 111643
Use the iMPACT tool in ISE. After implementing your design, just open the iMPACT tool and follow the instructions; it should be pretty simple. Mordehay. Sandip wrote: > Hi, > > I am using ML403 board with Virtex-4. I need to generate and download a > PROM file into the PROM and then use it after I switch it OFF and then > again ON. > Can anyone please help me out in doing so?? > > Thanks and regards, > SandipArticle: 111644
FWIW, I just tried ISE 8.2.03 running on a WinXP/SP2 machine and created a new project on a Samba drive on an FC3 machine (mapped to the Z:\ drive). Everything seems to be working. Cheers, Jim http://home.comcast.net/~jimwu88/tools/ jetmarc@hotmail.com wrote: > Hi, > > I'm having problems hosting a ISE/EDK 8.1 SP3 project on a file server. > > The server runs Samba3 on Redhat. The workstation is WinXP and mounts > a server folder with the "drive letter" method. > > When creating a new project, I get errors like "the destination folder > is read-only" (which is not true). When I create the project on a > local disk and copy it over to the network share later, most things > work. Only the EDK submodule continue to produce errors, for example > "error deleting ./src/microblaze - folder does not exist". > > EDK starts up with a message "CMD.EXE error, the current directory > (\\server\share\folder\) may not be a UNC network path, changing to > C:\". To me it seems that this may be the root cause for the > subsequent errors. > > Note that I never used the \\server\share path, but rather worked > through a driveletter mount. The ISE/EDK tools seem to resolve the > driveletter to a fully qualified network path, and then choke on it. > > Is there a solution for this? I would really like to host the project > on the server. I wouldn't mind to install other filesharing services > (or a windows file server), if that helps. > > Regards, > MarcArticle: 111645
Thanks very much, I'll try this and see what I can do. Al > Use the 'slave fifo' interface of the 68013. > Use it in one of its synchronous modes. (I tend to clock the FPGA off the > 48MHz clock from the 68013). > After that, it's really simple, provided you've set the 68013 up correctly - > you watch the appropriate 'fifo-full' flag from the 68013 and drive data in > accordingly. You need the technical reference manual for the 68013 (FX2), > not just the datasheet. (Both from Cypress' TERRIBLE website) > If you've never done anything with USB or device drivers before, start with > the Cypress "USB developer uStudio" or whatever they call it (CY4604) - the > CyUSB.sys generic driver. I can never find it on their TERRIBLE website, > but it's there somewhere. > > The FPGA -> FX2 link is trivial. The rest of the development might be > harder.Article: 111646
Marc Reinig wrote: > Ray, > > >>... under 13 msec including raster order input and output for an imaging >>application > > > Does this include the transfer time to and from the source/destination, or > only the calculation time? > > Marc Reinig > > Laboratory for Adaptive Optics Fax > It includes transfer in from the source, computation of the 2D FFT and transfer out to the sink.Article: 111647
Martin Thompson wrote: > All this applies in spades to RAM blocks, most of the time, all I want > is an address bus (or two), a write enable and a data bus (or two). > The Xiinx way forces me to instantiate specific sizes of blocks, which > change from generation to generation, when all I want to do is say I > need a 2Kx8 RAM. Martin, starting with Virtex4, the same RAMB16 primitive is used for all variants of the block RAM. It is parameterized with generics, which makes it a lot easier to instantiate a RAM that is sized according to the need. I find it still needs a wrapper, but at least that wrapper doesn't have to contain primitives with every combination of port sizes. I use a wrapper that automatically generates a RAM array with the appropriately sized ports on individual BRAMs based on the widths of the data ports and address ports. It also hides the parity bit/data bit distinction, plus it gives an easy method to porting to a different family (replace the wrapper).Article: 111648
Hi All, I started using a MicroBlaze design that included the FPU. As a test that the FPU is working I created a small application that divided the number 3.0 by 6.0 for a result of 0.5 and then displayed the result in a HyperTerminal window. I was hoping to see a result of 0x3F000000 for a single-precision 32-bit value, but instead I see a result of 0x3FE00000. This would be correct (with an extra 8 zeros) if I were using a double-precision 64-bit FPU. I then created a float variable initialized to 0.5 and displayed it. The result was the same, leading me to believe that the problem lies elsewhere. float x = 0.5; putnum (x); // this displays 3FE00000 Can someone enlighten me... I'm a tad bit confused... ThanksArticle: 111649
avishay wrote: > I'm designing with Altera FPGA with their Quartus software. My company > also have license for Mentor Graphics' Precision synthesis tool. From a > very brief check, it seems that Quartus' built-in synthesizer gives > comparable results to the Precision. I wonder if there is any advantage > in using an external synthesis tool. What does it give me more than > Quartus has to offer? Since you have the licenses, consider running your code on both to verify vendor independence. Quartus might save time in synthesis/place+route since it is a single tool. -- Mike Treseler