So you don't need to put a resistor in series with the input of the Spartan if the input is 5V? I know I'm being very anal here, but I just paid quite a bit of money for a development board. I don't want to have to pay a second time because I got careless and blew the Xilinx part. Thanks.Article: 45126
Austin Lesea wrote: > The software tools support creation of the key files, the downloading, and > the choice of encryption for the bitstream (starting position of the first > key - 1, 2 or 3 of 6 keys) and key generation. > > All of the back doors are closed. I thought so, but wasn't sure how you got there. Thanks! > If the battery-backed RAM for the keys is lost (Vbatt drops below ~ 0.4 > V), then the part will revert to the non-secure mode, and can not be > programmed with the bitstream until the keys are re-programmed. So any attempt to get to the guts wipes the keys. Very nice. To get the keys takes a *very* sophisticated setup! Patience, persistence, truth, Dr. mike -- Mike Rosing www.beastrider.com BeastRider, LLC SHARC debug toolsArticle: 45127
Hi Peter! > Hansi, you have to be more specific: > What do you mean by 50% duty cycle? How much error can you tolerate? How > much jitter can you tolerate? > Whatever you do, the signal will never be perfect. Just feeding a signal > through any type of buffer introduces duty-cycle distortion and jitter. > Can you tolerate 10 picoseconds, 50 picoseconds, or 100 picoseconds, or > more ? > > Your clock period is 12 500 picoseconds. Give us an error budget... This is a difficult question, because I don't have any experience with what will introduce too much distortion and what will not. Could you please tell me some possible ways (or point me to literature) for how to get the clock with the aid of an FPGA, and what jitter these methods generate. I'll then compare these methods and some others with off-chip oscillators/PLLs/... and try to get a good trade-off between accuracy and complexity. Thanks HansiArticle: 45128
Hi Ray! > Depends on how much jitter is acceptable in your application. Jitter at > the ADC clock translates to noise in your system. A DLL adds some > jitter to your clock, so it is usually not appropriate to use a clock > that has gone through a DLL to clock an ADC when you are sampling IF or > RF signals. Ah, I see. Thanks for the explanation! My ADC will run at full speed all the time, 80 MSPS. But my project will be an oscilloscope, not a communication system. It will not be involved in signal processing. Despite that, I thought to try an AM, FM, QAM, QPSK, ... demodulator inside the FPGA for signal post-processing, as other DSOs do. But this is just a game, to exercise what we've learned at the university last semester. > We try to use an externally generated clock for clocking both the ADC > and the FPGA. I had that idea too. This would save one FPGA IO pin (which are getting fewer and fewer at the moment. :-) ). Does a DLL perform that badly compared to a standalone oscillator regarding jitter? > The RAM is not nearly as critical, and can be clocked through the DLL > (for that matter, you can use the DLLs to generate a different clock for > the RAM so that you can run the RAM close to its maximum clock rate in > order to maximize the bandwidth. I plan to use a 512kx36 ZBT SRAM. They are available (e.g. from IDT) with 100..160MHz clocks. Another idea was to use standard PC RAM: PC100 DIMM, PC166 DIMM, ... That would give me a far wider data path (64 or 72 bit) which I can use for 4 ADCs (each 10 bit) _and_ 24 or 32 digital inputs (logic analyser). But the SDRAM access has to be formulated in VHDL. :-) I will use the DLL to "shift" the signal edges in time to access the RAM (instead of shifting the clock) if it becomes necessary. Thanks HansiArticle: 45129
Goran Bilski wrote: > > Hi, > ... > > The quickest way is to use LCC but it has some fee if you want to use it > commercially and > it doesn't produce high quality code like gcc. It is very fast to port to a > new target using LCC. I recently met an FPGA guy at an HPC congress. He is simply puzzled by the time/speedup figures in papers. He cannot believe that people are using PC clusters just to get N* speedup with N* power consumption. His paper dealt with hardware/software codesign. In his conclusions, he said that you are out of luck with public domain compilers, only slightly luckier with a professional compiler, and when it comes to scalable performance you should implement your time-consuming algorithms in hardware. One of his points was power consumption, of course. It seemed as if he really had looked at the code generated by the different compilers, and he had got a very, very pessimistic impression of them. Is he right? Is the gcc-generated code really high quality? -javierArticle: 45130
Hi Ray, > Depends on how much jitter is acceptable in your application. Jitter at the > ADC clock translates to noise in your system. A DLL adds some jitter to your > clock, so it is usually not appropriate to use a clock that has gone through a > DLL to clock an ADC when you are sampling IF or RF signals. We try to use an > externally generated clock for clocking both the ADC and the FPGA. Just a quick question as to the amount of jitter introduced by the DLL. I am using an external clock (which has precision of 1 mHz). I have quadrature mixed my IF down to DC, and am sampling at 32 MHz (nominally). The way my clocks are set up is that the external clock goes directly to the FPGA (DLL), then is fed out from the FPGA to the ADC (I did it this way to solve some timing issues between the FPGA and ADC). Would you suggest otherwise? It seems to be working at the moment. adrianArticle: 45131
The output from the DLL will always be worse than the clock in terms of jitter. A DLL does not remove jitter, and by nature will always add jitter. Jitter on the ADC clock translates to noise on the measured output. The noise amplitude is related to the jitter and the frequency of your input. Consider when you are sampling a sine wave. Jitter presents an uncertainty of when you sample. If you sample late by dT, then the measured signal could be off by as much as sin(dT/T) in a signal period of T. This manifests itself as amplitude noise in the sampled system. So the frequency of your input signal is more important than the sample rate as far as determining the effect of the jitter. I would imagine that in a DSO, you are going to want to avoid adding noise to the measurement signal as much as is practical, although it is not worth beating yourself up over noise that is below the ADC noise floor. If your system clock is 80 MHz, then you probably want to drive the SRAM clock from the FPGA; that way you can use the DLLs to multiply the SRAM clock so that it is running at something faster than 80 MHz. If you use the 166MHz rams, double the clock. If slower, then you can double followed by a divide by 1.5 if VirtexE, or if VirtexII, use the DCM to max out the SRAM clock. You could also use an inexpensive 32 MHz clock oscillator and multiply that by 4 in the DLL to get a 128 MHz SRAM clock. Johann Glaser wrote: > Hi Ray! > > > Depends on how much jitter is acceptable in your application. Jitter at > > the ADC clock translates to noise in your system. A DLL adds some > > jitter to your clock, so it is usually not appropriate to use a clock > > that has gone through a DLL to clock an ADC when you are sampling IF or > > RF signals. > > Ah, I see. Thanks for the explanation! My ADC will run at full speed all > the time, 80MSPS. > > But my project will be an oscilloscope, no communication system. It will > not be involved in signal processing. Despite that, I thought to try an > AM, FM, QAM, QPSK, ... demodulator inside the FPGA. For signal post > processing, as other DSOs do. But this is just a game, to exercise what > we've learned at the university last semester. > > > We try to use an externally generated clock for clocking both the ADC > > and the FPGA. > > That idea I had too. This would save one FPGA IO pin (which are getting > less and less at the moment. :-) ). Does a DLL perform that bad compared > to a standalone oscillator regarding jitter? > > > The RAM is not nearly as critical, and can be clocked through the DLL > > (for that matter, you can use the DLLs to generate a different clock for > > the RAM so that you can run the RAM close to its maximum clock rate in > > order to maximize the bandwidth. > > I plan to use a 512kx36 ZBT SRAM. They are available (e.g. from IDT) with > 100..160MHz clocks. Another idea was to use a standard PC RAM. PC100 DIMM, > PC166 DIMM, ... Therefore I have a far wider data path (64 or 72 bit) > which I can use for 4 ADC (each 10 bit) _and_ 24 or 32 digital inputs > (logic analyser). But the SDRAM access has to be formulated in VHDL. :-) > > I will use the DLL to "shift" the signal edges in time to access the RAM > (instead of shifting the clock) if it will be necessary. > > Thanks > Hansi -- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759Article: 45132
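Ray's point — that jitter on the sample clock turns into amplitude noise scaling with the *input* frequency, not the sample rate — can be illustrated numerically. This is a rough sketch with assumed numbers (10 ps RMS jitter on a unit-amplitude sine), not taken from the thread:

```python
import math
import random

# Illustrative only: sampling a sine with a jittery clock converts the
# timing uncertainty into amplitude noise, and that noise scales with
# the input frequency fin, not with the sample rate.
random.seed(0)

jitter_rms = 10e-12  # assumed 10 ps RMS clock jitter
n = 20000            # number of randomly phased sample points

noise = {}
for fin in (1e6, 40e6):  # low vs. near-Nyquist input frequency
    w = 2 * math.pi * fin
    err2 = 0.0
    for _ in range(n):
        phase = random.uniform(0.0, 2 * math.pi)  # random sample instant
        dt = random.gauss(0.0, jitter_rms)        # clock jitter
        err2 += (math.sin(phase + w * dt) - math.sin(phase)) ** 2
    noise[fin] = math.sqrt(err2 / n)
    # theory for a unit-amplitude sine: RMS error ~ w * jitter_rms / sqrt(2)
    print(f"fin = {fin/1e6:5.1f} MHz  noise RMS = {noise[fin]:.2e}")
```

The simulated RMS error tracks the small-angle prediction w·dT_rms/√2, so going from 1 MHz to 40 MHz input multiplies the jitter-induced noise by 40 at the same sample rate.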
Mark, If I were designing in a Virtex-II, I would definitely want the latest and greatest SW. It is my experience that if you do not have the latest and greatest SW when designing the latest and greatest FPGAs, you will run into many problems. This includes non-Xilinx support SW. If you are using leading edge FPGA HW, do yourself a favor -- break down and get the latest SW. Simon Ramirez, Consultant Synchronous Design, Inc. home: Oviedo, FL USA work: Bahston, MA USA "Mark Barr" <mrbarr@charter.net> wrote in message news:uiusr1oraksj8c@corp.supernews.com... > Does Xilinx's Foundation 2.1i support the Virtex-II? > (I only see Virtex & VirtexE as options. I think I have the latest service > packs). Am I just out of luck? > > Thanks. Mark.Article: 45133
An accurate clock and low jitter are contradictory requirements. A quartz oscillator has the lowest jitter. I've seen an HP8662 ultra low phase noise synth wobbling around a quartz in a sub-second manner. However, a quartz never has the exact frequency. They are always a ppm off, and they exhibit a temperature drift. And an exact 50% duty cycle is not achievable, meaning an osc is specified as being better than 40/60. Exact frequency can be achieved with a PLL at the cost of phase noise. How many ps phase noise rms are required? Rene -- Ing.Buero R.Tschaggelar - http://www.ibrtses.com & commercial newsgroups - http://www.talkto.net Johann Glaser wrote: > Hi! > > I'm going to build a DSO with 80 MSPS using a Xilinx FPGA (don't know > which one yet, a small one with <= 200-300 Gates). Therefore I need a very > accurate 80 MHz clock with low jitter and exact 50/50 duty cycle. > > When I feed the FPGA with an accurate 80 MHz clock and use the DLL to get > 50/50 duty cycle, will the DLL introduce additional > jitter/noise/inaccuracies? > > When I feed the FPGA with an accurate 40 MHz clock and use the DLL to > double the frequency (and yielding 50/50 duty cycle), will that introduce > jitter/noise/inaccuracies? > > Is it better to generate an accurate clock outside without using any FPGA > tools like the DLL and distributing it to the ADCs, the RAM and the FPGA? > > Thanks > HansiArticle: 45134
Hi Ray! > The output from the DLL will always be worse than the clock in terms of > jitter. A DLL does not remove jitter, and by nature will always add > jitter. Jitter on the ADC clock translates to noise on the measured > output. The noise amplitude is related to the jitter and the frequency > of your input. Consider when you are sampling a sine wave. Jitter > presents an uncertainty of when you sample. If you sample late by dT, > then the measured signal could be off by as much as sin(dT/T) in a > signal period of T. This manifests itself as amplitude noise in the > sampled system. So the frequency of your input signal is more important > than the sample rate as far as determining the effect of the jitter. I > would imagine that in a DSO, you are going to want to avoid adding noise > to the measurement signal as much as is practical, although it is not > worth beating yourself up over noise that is below the ADC noise floor. Ah, that's a great description. Bounding the jitter's noise at the ADC's noise floor gave me the hint. Now I did a small derivation: A sine wave with 40MHz is the maximum possible (allowed) input frequency. x(t) = A sin(wt) (1) The derivative gives the slope dx(t)/dt = A w cos(wt) (2) which has its maximum at wt = 2 pi k. max dx(t)/dt = A w (3) The ADC's noise and error is (looking into the datasheet) below one LSB. So the jitter time dT multiplied by the maximum slope gives the noise generated by the jitter, which has to be below the ADC's noise floor max dx(t)/dt * dT < 1 LSB (4) yielding a maximum dT dT < 1 LSB / max dx(t)/dt (5) The maximum amplitude of the input sine can reach half the ADC's input range (positive _and_ negative half) and since it is a 10-bit ADC, the maximum amplitude is A = 512 LSB (6) Inserting (6) into (3) and both into (5) yields a maximum jitter dT of dT < 77.7ps =========== Can you agree on that? Or did I miss a factor of 2 or 1/2 somewhere?
:-) > If your system clock is 80 MHz, then you probably want to drive the SRAM > clock from the FPGA, that way you can use the DLLs to multiply the SRAM > clock so that it is running at something faster than 80 MHz. If you use > the 166MHz rams, double the clock. If slower, then you can double > followed by a divide by 1.5 if VirtexE or if VirtexII, use the DCM to > max out the SRAM clock. You could also use an inexpensive 32 MHz clock > oscillator and multiply that by 4 in the DLL to get a 128 MHz SRAM clock. My current design only needs an 80 MHz RAM clock. There are two periods. 1) Sampling: data is written into the SRAM at full speed (80 MHz) all the time. When the trigger arrives, the sampling is continued until a certain amount of samples _after_ the event are stored. Then sampling is stopped. When the sampling rate is <= 20MHz (which is the next lower speed), for each sample 3 values are stored (min/max/avg) to avoid aliasing. This happens in ascending clock cycles, at full speed, but then there is a 1-79997 cycle idle time for the RAM. During this the FPGA determines the min/max/avg of the samples coming in from the ADC at full speed. 2) Readout: no sampling is done. The local 8-bit microcontroller reads out the RAM with the aid of the FPGA (at 12MHz or less), byte by byte; the RAM is only accessed once every (36 bit)/(8 bit) CPU read cycles. Bye HansiArticle: 45135
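The min/max/avg decimation Hansi describes can be sketched in a few lines. This is a hypothetical software model of the scheme (not his VHDL): at slower timebases, each stored sample summarizes a block of raw 80 MSPS samples, so narrow glitches stay visible instead of aliasing away.

```python
# Hypothetical model of min/max/avg decimation for a DSO: the ADC always
# runs at full rate; each block of n raw samples is reduced to the
# triple (min, max, avg), so a one-sample glitch survives in the max/min
# channels even at slow timebases.
def min_max_avg_decimate(samples, n):
    """Reduce `samples` by factor n, keeping (min, max, avg) per block."""
    out = []
    for i in range(0, len(samples) - n + 1, n):
        block = samples[i:i + n]
        out.append((min(block), max(block), sum(block) / n))
    return out

raw = [0, 1, 9, 2, 3, 3, 3, 3]        # contains a one-sample glitch (9)
print(min_max_avg_decimate(raw, 4))   # -> [(0, 9, 3.0), (3, 3, 3.0)]
```

A plain averaging or subsampling decimator would have hidden the glitch entirely; here it remains in the max channel of the first block.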
Hi! > Accurate clock and low jitter are contradicting. The lowest jitter has a > quartz oscillator. I've seen a HP8662 ultra low phase noise synth > wobbling around a quartz in a sub second manner. > > However a quartz never has exact frequency. They are always a ppm off. > And they exhibit a temperature drift. And exact 50% duty is not > achievable. Meaning an osc is specified as being better than 40/60. I see. The ADC is able to handle duty cycles between 45/55 and 55/45. > Exact frequency can be achieved with a PLL at the cost of phase noise. > How many ps phase noise rms are required ? For the noise the sampling jitter introduces to stay below 1 LSB, I can accept a maximum of 77.7 ps. Bye HansiArticle: 45136
Hi all, MicroBlaze actually runs quite fast (150 MHz), but I do not think it is high performance, because it is based on a stack, not the load/store structure of a real RISC, so it must use more instructions to do the same thing. Goran Bilski <goran.bilski@xilinx.com> wrote in message news:<3D2EF70C.A9DFC27@xilinx.com>... > Hi, > > A good starting point for a CPU on an FPGA is www.fpgacpu.org > > Since I designed the MicroBlaze, it's very possible to do a high performance 32-bit > RISC in an fpga. (150 MHz in VIIPro) > The distributed RAM in xilinx FPGAs makes it easy to do a register file. > > Göran > > res19j1c wrote: > > > Hi, > > I'm new to FPGAs. I just have a few questions. I'm sure some of these have > > been asked already but I've been using the google archive to search old > > posts and couldn't find anything. > > > > 1.) Is it correct that I could design a CPU on a FPGA? > > > > 2.) It seems that the ability to create a certain design is measured in > > gates. So the more complex the design, the more gates it needs. Is this > > true, or at least partly true? > > > > 3.) If so, how many gates would it take to implement a CPU with > > functionality about equivalent to an Intel IA-32 style chip (80386 - 80686). > > > > 4.) About how many MHz would this run at on a newer FPGA such as the > > Virtex-II?Article: 45137
"Johann Glaser" <Johann.Glaser@gmx.at> schrieb im Newsbeitrag news:agpa7q$ni06j$1@ID-115042.news.dfncis.de... > Hi Ray! > > > The output from the DLL will always be worse than the clock in terms of > > jitter. A DLL does not remove jitter, and by nature will always add > > jitter. JItter on the ADC clock translates to noise on the measured > > output. The noise amplitude is related to the jitter and the frequency > > of your input. Consider when you are sampling a sine wave. Jitter > > presents an uncertainty of when you sample. If you sample late by dT, > > then the measured signal could be off by as much as sin(dT/T) in a > > signal period of T. This manifests itself as ampitude noise in the > > sampled system. So the frequency of you input signal is more important > > than the sample rate as far as determining the effect of the jitter. I > > would imagine that in a DSO, you are going to want to avoid adding noise > > to the measurement signal as much as is practical, although it is not > > worth beating yourself up over noise that is below the ADC noise floor. > > Ah, that's a great description. The jitter's noise boundary on "ADC's > noise floor" gave me the hint. Now I did a small derivation: > > A sine wave with 40MHz is the maximum possible (allowed) input frequency. > x(t) = A sin(wt) (1) > The derivative gives the slope > dx(t)/dt = A w cos(wt) (2) > which has it's maximum at t = 2pi k. > max dx(t)/dt = A w (3) > > The ADC's noise and error is (looking into the datasheet) below one LSB. 
> So the jitter time dT multiplied by the maximum slope gives the noise > generated by the jitter, which has to be below the ADC's noise floor > max dx(t)/dt * dT < 1 LSB (4) > yielding a maximum dT > dT < 1 LSB / max dx(t)/dt (5) > > The maximum amplitude of the input sine can reach half the ADC's input > range (positive _and_ negative half) and since it is a 10-bit ADC, the > maximum amplitude is > A = 512 LSB (6) > Inserting (6) into (3) and both into (5) yields a maximum jitter dT of > dT < 77.7ps > =========== > > Can you agree on that? Or did I miss a factor of 2 or 1/2 somewhere? :-) Hello, the formula is correct, but the numeric calculation is one order of magnitude off. Summary: Condition: amplitude error <= 1 LSB of an N-bit ADC. Fsig is the highest frequency of your signal. Formula: dT < 1/(2*pi*Fsig) * 1/(2^(N-1)) Example: ADC: 10 bit, Fsig 20MHz, 40MHz dT < 7.8ps @40MHz dT < 15.8ps @20MHz These values cannot be achieved when running a signal through an FPGA where signals with other frequencies are switching. Even if you only feed the clock through the FPGA, you will have 10 to 50 times the required jitter. You could improve that by some amount when running the clock in/out differentially. Using an external crystal oscillator is the best way to go. Best Regards HelmutArticle: 45138
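Helmut's jitter budget formula is easy to check numerically. A quick sketch (the 20 MHz figure comes out closer to 15.5 ps when carried to more digits):

```python
import math

# Numeric check of the jitter budget dT < 1/(2*pi*Fsig) * 1/2^(N-1):
# the maximum clock jitter for which a full-scale sine at frequency
# Fsig moves by no more than 1 LSB of an N-bit ADC between samples.
def max_jitter(fsig_hz, n_bits):
    """Maximum tolerable jitter (seconds) for <= 1 LSB amplitude error."""
    return 1.0 / (2 * math.pi * fsig_hz) / 2 ** (n_bits - 1)

for fsig in (40e6, 20e6):
    print(f"Fsig = {fsig/1e6:.0f} MHz -> dT < {max_jitter(fsig, 10)*1e12:.2f} ps")
```

For a 10-bit ADC this gives about 7.77 ps at 40 MHz and 15.54 ps at 20 MHz, confirming that Hansi's 77.7 ps was a factor of ten too generous.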
Hi! > ADC: 10 bit, Fsig 20MHz, 40MHz > dT < 7.8ps @40MHz > dT < 15.8ps @20MHz Oops, shame on me. I forgot an additional zero. :-( I never thought that picoseconds would be important for me. Wow. :-) > These values cannot be achieved when running a signal through an FPGA > where signals with other frequencies are switching. Even if you only > feed the clock through the FPGA, you will have 10 to 50 times the > required jitter. You could improve that by some amount when running the > clock in/out differentially. Using an external crystal oscillator is the > best way to go. I see, thanks! So, one thing is clear now: I will _not_ use the FPGA to generate the clock. Thanks for all your kind help. Bye HansiArticle: 45139
Just wanted to add that any high performance high speed ADC needs its own differential clock circuitry to get the best from the ADC. If you are going to FFT the results from the ADC, bear in mind that your noise floor may be far less than 1 LSB due to processing gain. I'm not sure whether the phase noise on the clock will appear as broadband noise that also improves with processing gain, or whether it will remain highly correlated with the analog input. Thinking of the ADC input as a mixer, I would tend to think that the clock phase noise spectrum will 'add' to the analog input spectrum (probably freq multiply??). Either way, to get the best out of an ADC you would not let it anywhere near an FPGA. PS The ADC itself has some 'aperture uncertainty' or jitter associated with its sample/hold, so you'd probably RMS-sum the clock phase noise with the ADC uncertainty. Take a look at www.analog.com at their ADC notes. There's some really useful stuff on ADC clock sampling etc. Paul PS They make some excellent quality and really cheap evaluation boards that could be used directly in your own prototype systems. Definitely worth looking at.Article: 45140
There are exceptions to that, of course. For example, if I am not designing in VirtexII or IIP, I am still using M3.3sp8 because 1) the floorplanner in 4.x is seriously broken, and 2) the router in 3.3sp8 does a considerably better job than the router in 4.x with a given placement. I haven't compared routing results for VirtexII between the two versions. Based on some of the things the 4.2 router is doing in a carry-chain intensive V2 design, I suspect the 3.3 VirtexII routing is also better for a given placement. Unfortunately, the 3.3 SW doesn't know about the pipeline register in the multipliers and has overly optimistic speed files for V2. "S. Ramirez" wrote: > Mark, > > If I were designing a Virtex-II in, I would definitely want the latest and > greatest SW. It is my experience that if you do not have the latest and > greatest SW when designing the latest and greatest FPGAs, you will run into > many problems. This includes non-Xilinx support SW. If you are using > leading edge FPGA HW, do yourself a favor -- break down and get the latest > SW. > > Simon Ramirez, Consultant > Synchronous Design, Inc. > home: Oviedo, FL USA > work: Bahston, MA USA > > "Mark Barr" <mrbarr@charter.net> wrote in message > news:uiusr1oraksj8c@corp.supernews.com... > > Does Xilinx's Foundation 2.1i support the Virtex-II? > > (I only see Virtex & VirtexE as options. I think I have the latest service > > packs). Am I just out of luck? > > > > Thanks. Mark. -- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759Article: 45141
Hi, if you want to generate some CPU like an IA-32 architecture, you should not expect any higher execution speed than 1-5 MHz, since their design is very complex (MMU, branch prediction, register shadowing, out-of-order execution, ...). So actually it would not make much sense to try a task like this. Anyway, you would be a little short on gates for such a design as well; an i80686 has somewhere about 15 million transistors, I guess. You would not fit such stuff into an FPGA, would you? You could change lots of stuff from parallel execution into serial execution, but this would slow the whole thing down a lot. If you tried to spread such a design over a number of FPGA devices you would get trouble with speed and partitioning of the design. Doesn't seem to make so much sense to me, but perhaps I'm wrong. ;o) But why would you want to design something that already exists? It's sold in high volume and is cheap because of this fact. I would think the advantage of being able to design a CPU with an FPGA could be to try out new concepts of computing, or perhaps to try designs never implemented before, like sorts of a Turing machine?! Well, perhaps I am wrong with this as well? ;-) >>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<< On 7/12/02, 3:58:22 AM, "res19j1c" <res19j1c@verizon.net> wrote regarding FPGA CPU?: > Hi, > I'm new to FPGAs. I just have a few questions. I'm sure some of these have > been asked already but I've been using the google archive to search old > posts and couldn't find anything. > 1.) Is it correct that I could design a CPU on a FPGA? > 2.) It seems that the ability to create a certain design is measured in > gates. So the more complex the design, the more gates it needs. Is this > true, or at least partly true? > 3.) If so, how many gates would it take to implement a CPU with > functionality about equivalent to an Intel IA-32 style chip (80386 - 80686). > 4.) About how many MHz would this run at on a newer FPGA such as the > Virtex-II?Article: 45142
Hello! I am designing a system with a series of identical add-in data acq. cards. Each card interfaces with a shared bus via a Spartan-II from Xilinx. I am trying to avoid the expensive serial PROM configuration option, and instead would like to configure each device in slave serial mode. Each FPGA (of which there will be 1/board, or ~16) will be running identical code. Since both the /INIT and DONE lines are open-drain, can I just wire all the DIN, CCLK, /INIT, and DONE lines for each FPGA in parallel, and clock data in like it's a single FPGA? All FPGAs would need to be ready to receive the bitstream (i.e. have /INIT high) in order for the aggregate /INIT to actually be high; similar behavior would be apparent with DONE. Is there any reason why this won't work?? Thanks, SteveArticle: 45143
Hi Paul! > Just wanted to add that any high performance high speed ADC needs its > own differential clock circuitry to get the best from the ADC. > > If you are going to FFT the results from the ADC, bear in mind that your > noise floor may be far less than 1 LSB due to processing gain. > > I'm not sure, but I don't think the phase noise on the clock will appear > as a broadband noise that will also improve quite so well with > processing gain or whether it will remain highly related to the analog > input. Thinking of the ADC input as a mixer, I would tend to think that > clock phase noise spectrum will 'add' to the analog input spectrum > (probably freq multiply??) > > Either way, to get the best out of an ADC you would not let it anywhere > near an FPGA. > > PS The ADC itself has some 'aperture uncertainty' or jitter associated > with its sample/hold, so you'd probably RMS sum the clock phase noise > with the ADC uncertainty. Aah, I see. The data sheet says the ADC's jitter is 2 ps. I found really nice oscillators at http://www.mfelectronics.com/ with extra low jitter, down to 5 ps RMS (and 25 ps peak-peak). They also fulfill the requirement of rise and fall times being no more than 2ns. They have really nice things there. :-) Bye HansiArticle: 45144
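Paul's RMS-sum suggestion can be worked through with the figures quoted here (2 ps ADC aperture jitter, 5 ps RMS oscillator jitter) — a quick sketch assuming the two sources are independent:

```python
import math

# RMS-sum of independent jitter sources: with 2 ps aperture jitter and
# 5 ps RMS oscillator jitter, the total stays inside the ~7.8 ps budget
# derived earlier in the thread for 1 LSB at 40 MHz.
def rms_sum(*jitters_ps):
    """Root-sum-square of independent jitter contributions (ps)."""
    return math.sqrt(sum(j * j for j in jitters_ps))

total = rms_sum(2.0, 5.0)
print(f"total jitter = {total:.2f} ps RMS")  # -> 5.39 ps
```

So the oscillator dominates the budget; halving the aperture jitter would change the total by only a few percent.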
>in slave serial mode. Each FPGA (of which there will be 1/board, or >~16) will be running identical code. Since both the /INIT and DONE >lines are open-drain, can i just wire all the DIN, CCLK, /INIT, and >DONE lines for each FPGA in parallel, and clock data in like it's a >single FPGA? All FPGAs would need to be ready to receive the bitstream >(i.e. have /INIT high) in order for the aggregate /INIT to actually be >high; similar behavior would be apparent with DONE. Is there any >reason why this won't work?? Beware of glitches and cooties on your clock signals. Other than that, I can't see any reason it won't work. You might want some way to isolate the DONE signals for debugging so you can figure out which board isn't working right. -- The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.Article: 45145
> But why would you want to design something that already exists? It's > sold in high volume and cheap cause of this fact.Article: 45146
Hi all, Does anyone have any good estimates of the proportion of an FPGA's configuration memory used for defining the interconnect, or can provide me with a sensible way of estimating the number of configuration bits used by a Xilinx Virtex CLB (not including routing external to the CLB)? This information could probably be determined for Xilinx devices by someone with the JBits toolkit (and time on their hands). Has anyone done this? I would be interested in any data, but especially for Xilinx chips: XC4000X, Virtex and later architectures. Any help would be appreciated. From XAPP151 I've estimated the total number of configuration bits per CLB and per IOB _including_ the routing, and would like to split this figure up. Regards, SteveArticle: 45147
In article <3D3079F5.8060207@bham.ac.uk>, Steve Charlwood <s.m.charlwood@bham.ac.uk> wrote: >Hi all, > >Does anyone have any good estimates of the proportion of an FPGA's >configuration memory used for defining the interconnect, or can provide >me with a sensible way of estimating the number of configuration bits >used by a Xilinx Virtex CLB (not including routing external to the CLB)? >This information could probably be determined for Xilinx devices by >someone with the JBits toolkit (and time on their hands). Has anyone >done this? I would be interested in any data, but especially for Xilinx >chips: XC4000X, Virtex and later architectures. > >Any help would be appreciated. From XAPP151 I've estimated the total >number of coniguration bits per CLB and per IOB _including_ the routing, > and would like to split this figure up. Easy way: A virtex slice contains 32 bits for LUT configuration, and ~25 bits (I may miscount by one or two) for the internal slice routing and configuration (based on the slice internals diagram for Jbits). So just subtract. So what's the number? -- Nicholas C. Weaver nweaver@cs.berkeley.eduArticle: 45148
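The slice numbers above can be combined with a per-CLB total to get the split Steve asked for. A back-of-the-envelope sketch, assuming ~864 configuration bits per Virtex CLB tile (48 frames per CLB column x 18 bits per CLB row, the figures given in XAPP151 — worth re-checking against the app note):

```python
# Rough split of Virtex configuration bits between CLB logic and
# routing, using the slice numbers from the post above plus an assumed
# per-CLB total of 48 frames x 18 bits = 864 bits (from XAPP151).
lut_bits_per_slice = 32      # two 16-bit LUTs
ctrl_bits_per_slice = 25     # "~25", per the post; may be off by a couple
slices_per_clb = 2
total_bits_per_clb = 48 * 18 # includes all routing owned by the CLB tile

logic_bits = (lut_bits_per_slice + ctrl_bits_per_slice) * slices_per_clb
routing_fraction = 1 - logic_bits / total_bits_per_clb
print(f"logic: {logic_bits} bits/CLB, routing: ~{routing_fraction:.0%} "
      f"of {total_bits_per_clb} bits")
```

Under these assumptions the logic itself accounts for only ~13% of a CLB's configuration bits, with the remaining ~87% going to interconnect, which matches the folk wisdom that routing dominates FPGA configuration storage.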
Kevin Brace wrote: > I just checked the library guide, and it seems BUFG can only be used for > clocks. > One thing I can say about synthesis tools is that if you are trying to > do something unusual, you will likely end up, 1) Instantiating Xilinx > specific primitives in your code, or 2) You will have to edit the EDIF > netlist generated with a text editor. I'm a newbie at this: how do I go about specifying Xilinx primitives in my code? I'd like to grab hold of the LUT/RAM/shift reg block. It seems like I should be able to do both random access of any bit or shift the whole thing depending on where I am in an algorithm. I'd also like to grab the multiplier in the V2 parts. Any pointers greatly appreciated. I'm lost and confused in the Foundation doc files and several VHDL books. but I've got some hardware to beat on now, so I can at least try a few things and see what happens :-) Patience, persistence, truth, Dr. mike -- Mike Rosing www.beastrider.com BeastRider, LLC SHARC debug toolsArticle: 45149
It works, sure! It's a question of noise. You're sampling at 32MHz... is the information content 16MHz to DC? 1MHz passband? The jitter will introduce wideband noise that can get filtered out to a nice degree in the frequency domain. If one is doing time domain, there's not much filtering to be done on individual samples. Do you have any receiver sensitivity issues where you thought your minimum receive power should have been a little better? Noddy wrote: > Hi Ray, > > > Depends on how much jitter is acceptable in your application. JItter at > the > > ADC clock translates to noise in your system. A DLL adds some jitter to > your > > clock, so it is usually not appropriate to use a clock that has gone > through a > > DLL to clock an ADC when you are sampling IF or RF signals. We try to use > an > > externally generated clock for clocking both the ADC and the FPGA. > > Just a quick question as to the amount of jitter introduced by DLL. I am > using an external clock (which has precision of 1 mHz). I have quadrature > mixed my IF down to DC, and am sampling at 32 MHz (nominally). The way my > clocks are set up are that the external clock goes directly to FPGA (DLL), > then is fed out from the FPGA to the ADC (I did it this way to solve some > timing issues between the FPGA and ADC). Would you suggest otherwise? It > seems to be working at the moment. > > adrian