Hi, Can you verify the following: 1. %xilinx%coregen\ip\xilinx\other\com\xilinx\ip ------------ This folder should have a folder mig_v1_6 2. %xilinx%coregen\ip\xilinx\other --------------------------- This folder should have mig_v1_6.xml file it has all the settings needed by coregen. MIG has been working well with Coregen. Make sure to also check the Xilinx answer record 23787 - it talks about the ISE versions required for MIG 1.6. Let us know how this goes. Thanks, Nagesh RobertP. wrote: > Your name wrote: > > Hi there, > > > > I've recently installed the Xilinx ISE toolset so that I can play with the > > Memory Interface Generator. I followed the readme and installed ISE 8.1i, > > 8.1i_SP3 and then MIG 1.6. When I open CORE Generator and select MIG from > > the drop down list nothing happens. I get the "Customise" and "View Data > > Sheet" links but when I click on them nothing happens. > > > > Is there another download that I've missed or is the Xilinx software just > > flaky? > > > What family are you targeting? > > I think there is a mistake in mig_v1_6.xml file. > If you view it, you can find a following line: > <family>virtex5</family> > > In version 1.5 it is: > <family>spartan3 spartan3e virtex4</family> > > Some time ago I tried to add other families to a 1.6 version .xml file > and it worked fine for Spartan3. > > -- > Regards > RobertP.Article: 111276
Hi there, I found a blog written by Lattice's staff. It sounds like an "unofficial" channel for providing feedback or asking questions of Lattice. Here is the URL. http://www.latticeblogs.typepad.com/ Have fun. MKArticle: 111277
KJ, You like standards. We just finished implementing PCIexpress. When I look at the complexity of that standard, I just cringe. I cannot fathom why one needs so much stuff to communicate data. But then I am old frugal and basic guy who believes in simplicity. Talking about a FIFO, what other standard interface do you want, except data-in, data-out, 2 clocks, 2 enables, 4 flags and perhaps a reset? Isn't that about as generic as it can get? Why would Altera do it differently, except that they don't have a hard-coded 550 MHz one... :-( I vote for smarter synthesis tool that interprete your intentions in the best possible way. Peter Alfke On Oct 31, 12:33 pm, "KJ" <Kevin.Jenni...@Unisys.com> wrote: > Peter Alfke wrote: > > Real progress comes from better integration of popular functions. > > That's why we now include "hard-coded" FIFO and ECC controllers in the > > BlockRAM, Ethernet and PCIe controllers, multi-gigabit transceivers, > > and microprocessors.None of that is precluded, I'm just saying that I haven't heard why it > could not be accomplished within a standard framework. Why would the > entity (i.e. the interface) for brand X's FIFO with ECC, Ethernet, > blah, blah, blah, not use a standard user side interface in addition to > the external standards? Besides facilitating movement (which is not > the only concern) it promotes ease of use in the first place. > > > Clock control with DCMs and PLLs, as well as > > configurable 75-ps incremental I/O delays are lower-level examples.I agree, those are good examples of some of the easiest things that > could have a standardized interface....although I don't think you > really agree with my reading of what you wrote ;) > > > These features increase the value of our FPGAs, but they definitely are > > not generic.I said standardized not 'generic'. I was discussing the interface to > that nifty wiz bang item and saying that the interface could be > standardized, the implementation is free to take as much advantage of > the part as it wishes. > > > > > If a user wants to treat our FPGAs in a generic way, so that the design > > can painlessly be migrated to our competitor, all these powerful, > > cost-saving and performance-enhancing features (from either X or A) > > must be avoided. That negates 80% of any progress from generation to > > generation. Most users might not want to pay that price.My point was to agree on a standard interface for given functionality > not some dumbed down generic vanilla implementation of that function. > > To take an example, and using your numbers, are you suggesting that the > performance of a Xilinx DDR controller implemented using the Wishbone > interface would be 80% slower than the functionally identical DDR > controller that Xilinx has? If so, why is that? If not then what > point were you trying to make? > > > > > And remember, standards are nice and necessary for interfacing between > > chips, but they always lag the "cutting edge" by several years.I don't think any of the FPGA vendors target only the 'cutting edge' > designs. I'm pretty sure that most of their revenue and profit comes > from designs that are not 'cutting edge' so that would give you those > 'several years' to get the standardized IP in place. > > > Have > > you ever attended the bickering at a standards meeting?...Stop bickering so much. The IC guys cooperate and march to the > drumbeat of the IC roadmap whether they think it is possible or not at > that time (but also recognizing what the technology hurdles to get > there are). 
There is precedent for cooperation in the industry. > > > Cutting edge FPGAs will become ever less generic.Again, my point was standardization of the entity of the IP, not > whether it is 'generic'. > > > That's a fact of life, and it helps you build better and less costly > > systems.But not supported by anything you've said here. Again, my point was > for a given function, why can't the interface to that component be > standardized? Provide an example to bolster your point (as I've > suggested with the earlier comments regarding the Wishbone/Xilinx DDR > controller example). > > KJ > > KJArticle: 111278
KJ, This is actually a fairly common usage model for the Xilinx dual port RAMs. It lets you, for example store two words per clock on one port and read them one word per clock on the opposite port at perhaps a faster clock rate. The data width and address width vary inversely so that there are always 18k or 16K bits in the memory (18K for the widths that support the parity bit). For example, if you set one port for 36 bit width, that port has a depth of 512 words. If you then set the other port for 18 bit width, it has a 1K depth, and the extra address bit (the extra bits are added at the lsbs) essentially selects the low or high half of the 36 bit width for access through the 18 bit port. Similarly, a 9 bit wide port is 2K deep and accesses a 9 bit slice of that 36 bit word for each access, with the slice selected with the 2 lsbs of the 9 bit wide port's address. I've found the easiest way to deal with the dual port memories is to instantiate the primitives. Xilinx has made it far easier with the virtex 4 which has a common BRAM element for all aspect ratios with generics on it to define the width. Previously, you needed to instantiate the specific primitive with the right aspect ratios on each port. I found it easiest to develop a wrapper for the memory that uses the width of the address and data to select the BRAM aspect ratio and instantiate as many as are needed to obtain the data width, that way the hard work is done just once. This is especially true with the older style primitives.Article: 111279
There is also a way to use the two ports as two completely independent half-size RAMs, by making sure thet the two ports never overlap their addressing. The division does not even have to be 50:50 and the widths can differ, as can clock and enables. There is some room for creativity... Peter Alfke On Oct 31, 6:49 pm, Ray Andraka <r...@andraka.com> wrote: > KJ, > > This is actually a fairly common usage model for the Xilinx dual port > RAMs. It lets you, for example store two words per clock on one port and > read them one word per clock on the opposite port at perhaps a faster > clock rate. The data width and address width vary inversely so that > there are always 18k or 16K bits in the memory (18K for the widths that > support the parity bit). For example, if you set one port for 36 bit > width, that port has a depth of 512 words. If you then set the other > port for 18 bit width, it has a 1K depth, and the extra address bit (the > extra bits are added at the lsbs) essentially selects the low or high > half of the 36 bit width for access through the 18 bit port. Similarly, > a 9 bit wide port is 2K deep and accesses a 9 bit slice of that 36 bit > word for each access, with the slice selected with the 2 lsbs of the 9 > bit wide port's address. > > I've found the easiest way to deal with the dual port memories is to > instantiate the primitives. Xilinx has made it far easier with the > virtex 4 which has a common BRAM element for all aspect ratios with > generics on it to define the width. Previously, you needed to > instantiate the specific primitive with the right aspect ratios on each > port. I found it easiest to develop a wrapper for the memory that uses > the width of the address and data to select the BRAM aspect ratio and > instantiate as many as are needed to obtain the data width, that way the > hard work is done just once. This is especially true with the older > style primitives.Article: 111280
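To make the aspect-ratio addressing above concrete, here is a small behavioral VHDL sketch: one 512 x 36 storage array seen as 36 bits on port A and 18 bits on port B, with port B's extra LSB selecting the low or high half-word, as Ray describes. The widths are just an example, and this is an illustration of the addressing rather than the primitive instantiation or wrapper he recommends; whether a synthesizer maps an asymmetric inference like this onto a single block RAM varies by tool, so for a real design instantiating the primitive is the safer route.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity asym_dpram is
  port (
    clka  : in  std_logic;
    wea   : in  std_logic;
    addra : in  std_logic_vector(8 downto 0);    -- 512 x 36
    dia   : in  std_logic_vector(35 downto 0);
    doa   : out std_logic_vector(35 downto 0);
    clkb  : in  std_logic;
    web   : in  std_logic;
    addrb : in  std_logic_vector(9 downto 0);    -- 1K x 18
    dib   : in  std_logic_vector(17 downto 0);
    dob   : out std_logic_vector(17 downto 0));
end entity;

architecture rtl of asym_dpram is
  type ram_t is array (0 to 511) of std_logic_vector(35 downto 0);
  shared variable ram : ram_t;   -- one physical array shared by both ports
begin
  -- Port A: full 36-bit access
  process (clka)
  begin
    if rising_edge(clka) then
      if wea = '1' then
        ram(to_integer(unsigned(addra))) := dia;
      end if;
      doa <= ram(to_integer(unsigned(addra)));
    end if;
  end process;

  -- Port B: 18-bit access; addrb(0) selects the low or high half-word
  process (clkb)
    variable word : integer range 0 to 511;
  begin
    if rising_edge(clkb) then
      word := to_integer(unsigned(addrb(9 downto 1)));
      if web = '1' then
        if addrb(0) = '1' then
          ram(word)(35 downto 18) := dib;
        else
          ram(word)(17 downto 0)  := dib;
        end if;
      end if;
      if addrb(0) = '1' then
        dob <= ram(word)(35 downto 18);
      else
        dob <= ram(word)(17 downto 0);
      end if;
    end if;
  end process;
end architecture;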
I think you are right that sampling and then filtering does not buy you more S/N ratio. However, I'm not sure if I explained the problem correctly. The analogy would be more similar to if you had a noisey dotted line that goes down the middle of the road. You want to follow the line as quickly as possible, but your car only moves at 200 Hz. So you can either decimate your observations of the dotted line to 200 Hz or filter to 200 Hz. Maybe if I understood what decimation did in fourier space? Is decimation like sampling? Is it like applying a cut-off frequency, and so the nyquist theorem applies? does it need it's own anti-aliasing filters? Thanks for this discussion! I think I've made a lot of headway in understanding. If I can write a filter that filters once or twice and gets most of the work done by decimation, this will save a lot of gates on my fpga. Also, thanks for the choise of textbooks. The book I've been consulting is good at almost telling you how it works, but then they give example code in MATLAB, which is too meta and you never see anything! Will On Oct 31, 7:35 pm, "John_H" <newsgr...@johnhandwork.com> wrote: > First addressing the broader question: > The A/D converter will have a noise floor specified in dBFS. This noise > floor is what you would integrate over the filtered bandwidth to determine > the total Signal to Noise Ratio. If you use a much higher bandwith, the > total noise power will increase proportionately but you filter that back out > and you're back to where you started. Typically the moise increases when > you start to approach the device limits; using a 1 MS/s A/D at 1 kS/s versus > 5 kS/s may make no difference but 500 kS/s to 800 kS/s may be noticeable. > > At particularly low frequencies, you may see an uptick in the noise floor as > other noise sources start to invade your signal chain such as schott noise. > > On another subject, decimation could be a good thing. Imagine being able to > evaluate your position on the road as you're driving at 60 frames per second > video rates. Do you need to adjust your steering at 60 Hz? No. Your > adjustments are partly dictated by the response of your system. There's no > sense in providing 200 kS/s of adjustments to a system with 200 Hz response. > You can, but there will be no gain as driving isn't improved with faster > corrections. > > - John_H > > <will.pa...@gmail.com> wrote in messagenews:1162334446.598731.231140@m7g2000cwm.googlegroups.com... > > > Thanks for your reply. It does seem like I sould start with an easy > > filter first and work up to the filter I want to built. > > > Something Tim Wescott said perplexes me, though. Filtering and then > > decimating does sound like it will be computationally cheap, but I > > wonder just what is the advantage of decimating a control variable? > > > I'm fixed at 200kHz sampling, and the process I'm interested in > > controling is at 200 Hz. Having 200 kHz bandwidth makes the noise > > larger, (it should be a constant in volts per root hertz, so more > > bandwidth means larger noise in volts). It seems to me that filtering > > to 200 Hz is the correct way to reduce the bandwidth and thus reduce > > the noise in volts. Minimizing the noise is an important concern for > > controling the process quickly. > > > It's not clear to me (and I'm sure this is my fault for not being > > better informed) that decimating is an equivalent way of reducing the > > bandwidth. What effect will decimating have on noise as expressed in > > volts? 
> > > And this brings up a broader question I've hand in the back of my mind. > > Is it better to sample as high as possible and then filter down to the > > desired bandwidth? Or is it better to sample as needed, and the gains > > of sampling like mad and filtering like crazy are minimal. Forget the > > practical aspects of limited memory and large files. > > > I do have a hardware anti-aliasing filters. Your notes made some good > > points, anti-aliasing is always an important concern! > > > WillArticle: 111281
Peter Alfke wrote: > KJ, You like standards. > We just finished implementing PCIexpress. When I look at the complexity > of that standard, I just cringe. I cannot fathom why one needs so much > stuff to communicate data. But then I am old frugal and basic guy who > believes in simplicity. Could this have been made any faster by relaxing some of the standard? (And would that have a cost, like interoperability?) -jgArticle: 111282
will.parks@gmail.com wrote: > Thanks for your reply. It does seem like I sould start with an easy > filter first and work up to the filter I want to built. > > Something Tim Wescott said perplexes me, though. Filtering and then > decimating does sound like it will be computationally cheap, but I > wonder just what is the advantage of decimating a control variable? > > I'm fixed at 200kHz sampling, and the process I'm interested in > controling is at 200 Hz. Having 200 kHz bandwidth makes the noise > larger, (it should be a constant in volts per root hertz, so more > bandwidth means larger noise in volts). It seems to me that filtering > to 200 Hz is the correct way to reduce the bandwidth and thus reduce > the noise in volts. Minimizing the noise is an important concern for > controling the process quickly. > > It's not clear to me (and I'm sure this is my fault for not being > better informed) that decimating is an equivalent way of reducing the > bandwidth. What effect will decimating have on noise as expressed in > volts? > > And this brings up a broader question I've hand in the back of my mind. > Is it better to sample as high as possible and then filter down to the > desired bandwidth? Or is it better to sample as needed, and the gains > of sampling like mad and filtering like crazy are minimal. Forget the > practical aspects of limited memory and large files. > > I do have a hardware anti-aliasing filters. Your notes made some good > points, anti-aliasing is always an important concern! > > Will > > Decimating is different than bandwidth reduction. Decimating is basically lowering the sample rate to something less that is still sufficiently sampled to convey the bandwidth of the signal. For example, if oyur signal has a bandwidth of 200 Hz, it can be fully reconstructed with sample rates as low as 400 Hz (although the anti-aliasing filters will approach an ideal brick-wall filter as you approach 400 Hz sample rate, and therefore get tougher to design as your sample rate approaches the nyquist rate for the signal). Now, if your signal that is sampled at 200KHz has other noise or other signals that you are not interested in, you need to filter those out before you decimate, otherwise those out-of-band signals will fold into your band of interest (aliasing) when you decimate the sample rate. Whether you need to filter or not depends on the signal. Oversampling has the advantage of making the analog anti-alias filters considerably easier to design, and the more the signal is oversampled the less stringent the filter characteristics are. It also spreads out the noise due to quantization and other noises in the ADC over a wider bandwidth, so after filtering you can end up with a lower effective noise floor. The disadvantage of oversampling is is greatly increases the processing load for your digital processing. In the case of an FIR filter, the steepness of the features in the filter's spectral response translate to the length of the filter required. If you try to filter the 200 Hz signal from the 200KHz signal in one step, the passband of the filter is only 1/500th of the fs/2 for the filter, which in turn means a rather long FIR filter. Also, the higher sample rate means less time per sample to compute the filtered sample; so by using a higher sample rate, you have both a greater number of multiplications to perform per sample, and less time to do it in. The brute-force approach would be to make such a filter, and then decimate by 500 after the filter. 
As it turns out, you don't need to do the computations for the discarded output samples. There is a structure called a poly-phase decimator that rearranges the math so that the filter works at the output rate and only performs the computations that are needed for the retained output samples. Also, in this case, it is more efficient to decimate in steps with a series of filters. The first filters in the series need only a few taps because the transition region between the passband and stopband doesn't have to be sharp, as it gets cut off by the next filter in the chain. Half-band filters are a special class of FIR filters that have a response that is anti-symmetric about fs/4 and have nearly half the coefficients set to zero. For more details, google multi-rate filtering. For high decimation ratios like you have here (500:1), there is yet another filter structure that is quite helpful called the CIC or cascaded-integrator-comb filter, also sometimes known as a Hogenaur filter for the inventor. That filter is a multiplier-less filter that is a recursive implementation of a boxcar filter (a moving average without the divide by N). If I were designing your filter, I would use a CIC filter to decimate by 125 or 250 to get the rate down, and follow that with one or two decimate by 2 FIR stages, with the second FIR being the one that defines the actual shape of the passband. The first FIR filter is probably no more than about 15 taps, and can be a halfband filter. The second is typically less than 100 taps, but depends highly on the shape of the passband relative to the output sample rate.Article: 111283
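To make the CIC stage concrete, here is a minimal VHDL sketch of a 3-stage CIC decimator. The input width, accumulator width and decimation ratio are illustrative assumptions rather than values from this thread, and the small half-band/shaping FIR stages that Ray recommends after it are not shown. The key sizing rule is that the accumulators must cover the bit growth of the filter, roughly IN_WIDTH + 3*log2(RATIO) bits, after which two's-complement wraparound takes care of itself.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity cic_decimator is
  generic (
    IN_WIDTH  : natural := 12;
    RATIO     : natural := 250;   -- decimation factor R
    ACC_WIDTH : natural := 36);   -- >= IN_WIDTH + 3*log2(RATIO)
  port (
    clk      : in  std_logic;
    rst      : in  std_logic;
    din      : in  signed(IN_WIDTH-1 downto 0);    -- one sample per clk
    dout     : out signed(ACC_WIDTH-1 downto 0);   -- truncate/round externally
    dout_vld : out std_logic);                     -- one pulse per output sample
end entity;

architecture rtl of cic_decimator is
  signal i1, i2, i3 : signed(ACC_WIDTH-1 downto 0) := (others => '0');
  signal d1, d2, d3 : signed(ACC_WIDTH-1 downto 0) := (others => '0');
  signal cnt        : natural range 0 to RATIO-1 := 0;
begin
  process (clk)
    variable c1, c2, c3 : signed(ACC_WIDTH-1 downto 0);
  begin
    if rising_edge(clk) then
      dout_vld <= '0';
      if rst = '1' then
        i1 <= (others => '0'); i2 <= (others => '0'); i3 <= (others => '0');
        d1 <= (others => '0'); d2 <= (others => '0'); d3 <= (others => '0');
        cnt <= 0;
      else
        -- three integrators running at the input rate
        i1 <= i1 + resize(din, ACC_WIDTH);
        i2 <= i2 + i1;
        i3 <= i3 + i2;
        -- every RATIO-th sample, run the three comb (difference) stages
        -- at the decimated output rate
        if cnt = RATIO-1 then
          cnt <= 0;
          c1 := i3 - d1;  d1 <= i3;
          c2 := c1 - d2;  d2 <= c1;
          c3 := c2 - d3;  d3 <= c2;
          dout     <= c3;
          dout_vld <= '1';
        else
          cnt <= cnt + 1;
        end if;
      end if;
    end if;
  end process;
end architecture;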
will.parks@gmail.com wrote: > I think you are right that sampling and then filtering does not buy you > more S/N ratio. > Depends on the source of the noise. It isn't going to help if you have wideband noise in the signal. However, if the noise is due to quantization in the ADC, then by decimating you gain effective ADC bits. This is sort of like averaging the error on many samples to get a better estimate of one sample assuming the input hasn't changed significantly over the many samples. You can also shape the noise induced within the system by employing feedback to push the system noise into the stopband of your filter. This is the principle behind delta-sigma converters. The classic advantage to oversampling, as I mentioned before is that it relaxes the specification for your anti-alias filters, making them much cheaper to implement.Article: 111284
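To put a number on the "effective ADC bits" point: for an ideal converter whose quantization noise is white and spread uniformly out to f_s/2 (a textbook assumption, not a statement about the poster's hardware), filtering down to a band B before decimating improves the in-band SNR by

\Delta\mathrm{SNR} = 10\log_{10}\!\left(\frac{f_s/2}{B}\right)\ \mathrm{dB},

i.e. about 3 dB (half a bit) per doubling of the oversampling ratio. For the 200 kS/s-to-200 Hz case discussed in this thread that upper bound is 10*log10(500) ≈ 27 dB, roughly 4.5 extra bits - but only if the dominant noise really is quantization noise rather than wideband analog noise, as Ray notes.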
rickman wrote: > aravind wrote: > >>Hi >> Im planning to design an SPDIF receiver for implementation on >>Spartan 3 FPGA , But im not sure how to go about the design,Does any >>one have ideas ? >>Thank u > > > I did a quick Google search and found some very informative entries at > Wikipedia. To get the low level format description I had to click > through to the AES/EBU description and even more detail is available at > one of the references given. > > This should not be a difficult design to figure out, but there are > details that require attention depending on your application. One big detail is are you just receiving the data, or are you also recovering a clock to feed to downstream codecs?Article: 111285
Good news: the "spectre of metastability" appears to be on the wane. I live less than a mile from Xilinx HQ, yet on Halloween night, not a single person dressed as metastability showed up at my door for candy (the closest was a kid in a Napoleon Dynamite getup). Of course, real metastability is still out there. As Jack Nicholson would say, act accordingly. Bob Perlman Cambrian Design Works http://www.cambriandesign.comArticle: 111286
will.parks@gmail.com wrote: (top posting fixed) > Tim Wescott wrote: > -- snip -- > Thanks for your reply. It does seem like I sould start with an easy > filter first and work up to the filter I want to built. > > Something Tim Wescott said perplexes me, though. Filtering and then > decimating does sound like it will be computationally cheap, but I > wonder just what is the advantage of decimating a control variable? It reduces computational load in two ways. First, and obviously, it reduces the frequency that you have to recompute the loop. This is a big deal if you're using a microprocessor, not so much with an FPGA. Secondly, it reduces the precision requirements, and hence data path widths, in any internal filters in your controller. Controllers are almost invariably implemented with IIR filters, and the closer the filter's poles and zeros are to 1 the more precision one needs to implement them properly. If you downsample by a factor of 1000 you theoretically reduce your data path widths by 10 bits. > > I'm fixed at 200kHz sampling, and the process I'm interested in > controling is at 200 Hz. Having 200 kHz bandwidth makes the noise > larger, (it should be a constant in volts per root hertz, so more > bandwidth means larger noise in volts). It seems to me that filtering > to 200 Hz is the correct way to reduce the bandwidth and thus reduce > the noise in volts. Minimizing the noise is an important concern for > controling the process quickly. Ooh ouch. You're confusing sampling rate with bandwidth. Please read that paper on sampling that I linked to -- you need it. By "controlling at 200Hz" do you mean that the process has a natural bandwidth of 200Hz before you control it, that you designing things to have a 200Hz bandwidth after you control it, or that for some reason you're constrained to sampling at 200Hz on the output side? These are three very different questions, all of which have the answer "200Hz". Having a 200kHz bandwidth doesn't necessarily make the noise larger -- it depends on the source of the noise. And you seem to be confusing sampling rate with bandwidth. They _aren't_ the same. If you're sampling with a typical SAR A/D converter then the converter's front end has a bandwidth that's well above the sampling rate, and it has a noise level that's well above thermal noise levels. One of these A/D converters will pretty much give you the same noise statistics per sample whether you sample it as fast as it'll go, or once every year. In this case oversampling and filtering _after_ the A/D conversion _will_ reduce noise. > > It's not clear to me (and I'm sure this is my fault for not being > better informed) that decimating is an equivalent way of reducing the > bandwidth. What effect will decimating have on noise as expressed in > volts? No, decimating by itself doesn't reduce bandwidth, and it won't reduce noise at all. The effect of decimating is to just resample the sampled data. To effectively decimate you need low-pass filter first. This is anti-aliasing, but it isn't usually stated as such -- people usually say "filter and decimate". I dunno why -- it's just common usage. > > And this brings up a broader question I've hand in the back of my mind. > Is it better to sample as high as possible and then filter down to the > desired bandwidth? Or is it better to sample as needed, and the gains > of sampling like mad and filtering like crazy are minimal. Forget the > practical aspects of limited memory and large files. It depends on your A/D converter. 
With a typical SAR converter, if noise is everything you should sample like mad, filter and decimate. In fact, a sigma-delta converter does just that. > > I do have a hardware anti-aliasing filters. Your notes made some good > points, anti-aliasing is always an important concern! > > Will > > -- Tim Wescott Wescott Design Services http://www.wescottdesign.com Posting from Google? See http://cfaj.freeshell.org/google/ "Applied Control Theory for Embedded Systems" came out in April. See details at http://www.wescottdesign.com/actfes/actfes.htmlArticle: 111287
Hi, it seems obvious, but I'll ask the question anyway to be sure. My dual-port RAM on a Spartan 3 does not use the rst or ena inputs, and it has only one clock (clka). The schematic view shows these pins going nowhere. Can I assume this is "shorthand" for "goes to pullup"? The documentation on the embedded RAMs says that pulling up unused inputs is recommended because "it takes no routed resources", versus driving them to zero. This implies there is a special bit to pull up right at the input. Does the unconnected pin on the schematic represent this? Thanks, Scott MooreArticle: 111288
Peter Alfke wrote: > Have you tried the distributors, including DigiKey? Digi-Key has NO stock of 5 V Spartan parts. They will gladly order some if I place an order for only 60 parts. That's a $2178 order! This is what I get from a number of "stocking" distributors. Digi-Key dropped the 5 V Spartan line about 2001, I think. Somebody else mentioned the Xilinx store. They don't sell chips, period, as far as I can see. They have links to their franchised distributors, but there is no link at all for Spartan, or Spartan 2 or 2E. Only Spartan 3 and 3E. JonArticle: 111289
John_H wrote: > "Uwe Bonnes" <bon@hertz.ikp.physik.tu-darmstadt.de> wrote in message > news:ei8iuu$elg$1@lnx107.hrz.tu-darmstadt.de... > >>Peter Alfke <peter@xilinx.com> wrote: >> >>>Have you tried the distributors, including DigiKey? >>>Peter Alfke >> >>For the 5 Volt Xilinx series, none of the part is on stock at Digi, >>and all have quite high minimum order >> >>-- >>Uwe Bonnes bon@elektron.ikp.physik.tu-darmstadt.de >> >>Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt >>--------- Tel. 06151 162516 -------- Fax. 06151 164321 ---------- > > > > > XCS30XL-4TQG144I (or C) are both in stock at Digikey but the -3 speed grades > have the 60pc minimum order. Can you get by with the different speed grade? > > Spartan XL is not the 5 V Spartan, it is 3.3 V, and not pin compatible with 5 V Spartan. These chips will not work without major changes to the board. I am doing that, but going to XCS2E. It is a complete from-scratch redesign. JonArticle: 111290
will.parks@gmail.com wrote: <snip> > Maybe if I understood what decimation did in fourier space? Is > decimation like sampling? Is it like applying a cut-off frequency, and > so the nyquist theorem applies? does it need it's own anti-aliasing > filters? <snip> In Fourier space, decimation folds the wideband spectrum over the lower frequency in precisely the same fashion that sampling below nyquist aliases the higher frequency content back into your band of interest. If you sample significantly below nyquist (or decimate by a large factor) the folding can happen several times. By guaranteeing you have good stopbands where the aliases fall on your bandwidth of interest, the filter-then-decimate approach works for analog and/or digital front ends. Ray's description is superb but I, too, like to visualize Fourier space. - John_HArticle: 111291
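For reference, the frequency-domain identity behind the folding John_H describes: decimating by an integer factor M, y[n] = x[Mn], gives

Y\!\left(e^{j\omega}\right) = \frac{1}{M}\sum_{k=0}^{M-1} X\!\left(e^{j(\omega - 2\pi k)/M}\right),

a sum of M stretched and shifted copies of the original spectrum. Any energy that is not removed by a filter before the decimation lands in one of those copies and aliases on top of the band of interest, which is exactly why the order is always "filter, then decimate".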
Mark wrote: > Frank van Eijkelenburg wrote: >> Hi, >> >> I have a bootloader running from internal ram (m4k blocks). I also >> build a standalone application to run from external ram. The >> application is a .bin file which is sent to the bootloader through a >> serial interface (RS232). The bootloader copies the application to >> external ram and executes it. My application is built to run from >> address 0x00200000. If I make a .bin file from the generated .elf file >> I get a file of about 2 MB. This is because the alt_exception and >> alt_irq_handler is laid at address 0x20 and 0xEC. AFAIK this is not >> necessary. Do I have to make a linker file to fix this? Or should I >> use another startup assembly file? In case of a linker file, does >> anyone have an example for this situation? >> >> So the bootloader runs from internal ram (with base address 0) and the >> application runs from external ram (with base address 0x00200000). > > I'm assuming you're producing the bootloader and application as separate > programs and that the bootloader isn't using interrupts. > > In SOPC Builder, set the reset address to 0 and set the exception address > to 0x00200000. When you extract the .bin file from the elf file, exclude > the .reset section so you don't get the generated code at 0x0. > > Mark > Actually, I am using interrupts. Changing the reset address and exception address in SOPC Builder is not an option: each time I build the bootloader or application I would have to check that these addresses are correct. Is there another option, like using a linker script? For the MicroBlaze I know you can specify the start address; is there a similar option for the Nios II software? TIA, FrankArticle: 111292
Jeff Cunningham wrote: > rickman wrote: > > aravind wrote: > > > >>Hi > >> Im planning to design an SPDIF receiver for implementation on > >>Spartan 3 FPGA , But im not sure how to go about the design,Does any > >>one have ideas ? > >>Thank u > > > > > > I did a quick Google search and found some very informative entries at > > Wikipedia. To get the low level format description I had to click > > through to the AES/EBU description and even more detail is available at > > one of the references given. > > > > This should not be a difficult design to figure out, but there are > > details that require attention depending on your application. > > One big detail is are you just receiving the data, or are you also > recovering a clock to feed to downstream codecs? I want to receive the data from SPDIF and convert it to I2S format to shift the audio data out to a DAC such as the CS4334. How do I sample the SPDIF signal, synchronize to it, and convert the biphase-mark code to normal 16-bit audio data in a register? I've done all the basic stuff in VHDL, like multiplexed 7-segment display drivers, simple programs on the PicoBlaze processor, etc.Article: 111293
(1) The frequency response of a scope input has nothing to do with the input impedance. The frequency response of a scope is measured by driving the input with an ideal 50 Ohm signal generator. The input impedance matters little, and with most very high bandwidth scopes the input impedance is often 50 Ohms. The -3dB bandwidth is measured in this way. That is the response of the scope by itself, no probe involved. These are the ideal conditions - scope alone.

(2) The input impedance of the front end of the scope is reactive, and has both a resistive component and a capacitive component. Those are the figures you quote. The idea is to match/trim a probe to this input impedance.

(3) When you attach a probe (say 10:1), you adjust/trim the series capacitance of the probe so that it maintains the 10:1 ratio across all frequencies. The voltage divider is formed by the series impedance of the probe and the front-end impedance of the scope. That is how the probe input impedance ends up at 10M Ohms while the scope is 1M. However, all probes are designed to work/trim against a fixed range of input impedance. So for a scope with a 15 pF input, a probe that can only trim against a 30-60 pF input impedance would not trim right. They tell you what the input impedance is so you can select a probe that will match.

(4) The 12MHz bandwidth you quote is very low. For any digital work today you need at least a 100MHz scope. Most 10:1 probes today easily go to 100MHz or even 500MHz. In your case the most limiting factor is the scope. 12MHz is 20-30 years out of date. You cannot troubleshoot modern circuitry with 5 ns edge times using a scope with 12MHz bandwidth; those edge rates represent about 200MHz of signal bandwidth. Your signals could be ringing everywhere, causing massive errors in the logic, and you would never even see it. You can buy a 100MHz scope on eBay for about $100. Regards, Chris.Article: 111294
On 2006-10-31, aravind <aramosfet@gmail.com> wrote: > Im planning to design an SPDIF receiver for implementation on > Spartan 3 FPGA , But im not sure how to go about the design,Does any > one have ideas ? How *not* to do it: http://projects.jusme.com/project/spdifdac (That's my design so I can say that :) HTH, -- Ian "Tamahome!!!" - "Miaka!!!"Article: 111295
I want to use the MCH OPB DDR controller (mch_opb_ddr) provided in EDK to connect a DDR memory module to a custom processor design, which does not support the OPB (On-chip Peripheral Bus). I'm going to implement the design in a Virtex-II Pro device (XUP board). Are there any modifications that I have to make to the controller produced by the EDK in order to attach it to my design? Is this usage possible, or can the controller only be attached to a MicroBlaze processor?Article: 111296
On Wed, 01 Nov 2006 00:20:36 GMT, "John_H" <newsgroup@johnhandwork.com> wrote: >"Uwe Bonnes" <bon@hertz.ikp.physik.tu-darmstadt.de> wrote in message >news:ei8iuu$elg$1@lnx107.hrz.tu-darmstadt.de... >> Peter Alfke <peter@xilinx.com> wrote: >>> Have you tried the distributors, including DigiKey? >>> Peter Alfke >> >> For the 5 Volt Xilinx series, none of the part is on stock at Digi, >> and all have quite high minimum order >> >> -- >> Uwe Bonnes bon@elektron.ikp.physik.tu-darmstadt.de >> >> Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt >> --------- Tel. 06151 162516 -------- Fax. 06151 164321 ---------- > > > >XCS30XL-4TQG144I (or C) are both in stock at Digikey but the -3 speed grades >have the 60pc minimum order. Can you get by with the different speed grade? > Have you tried www.findchips.com ? plenty of hits on a search for XC2S -is this the correct family?Article: 111297
"Andreas Ehliar" <ehliar@lysator.liu.se> wrote in message news:ei8e0q$avt$1@news.lysator.liu.se... > On 2006-10-31, Ben Jones <ben.jones@xilinx.com> wrote: >> In fact, register duplication rarely makes timing better; in fact in many >> high-performance pipelined designs, it can make it much worse >> (explanation >> available on demand). > > I guess I'll bite and see if my understanding is close to what you have > in mind: > > My feeling is that register duplication could worsen a design with > combinatorial logic followed by a flip flop. This means either that > the combinatorial logic has to be duplicated (which would enlarge the > design and perhaps slow down the circuit due to extra routing, or > by only duplicating the flip flop which will certainly demand extra > routing since it is normally possible to place a FF directly after a LUT > using only high speed dedicated routing. Got it in one. The "enlargement" problem isn't much of a problem, since in FPGA technology if you need to allocate a new register then you basically get the preceding LUT for free. However, it's the "extra routing" problem that's the killer. > On the other hand, I can't really see that register duplication will make > the performance much worse (unless the synthesizer makes very bad choices) Say your design is supposed to run at 400MHz (2.5ns clock period). The extra route from the combinatorial output of the LUT to the input of the "extra" register added by the replication process may be 500ps. That's 20% of your cycle budget! Often, it's more like 800ps... of course if your clock speed is only 100MHz, this is much less of an issue. There may be a few scenarios in which register duplication really is a good thing, but in my experience synthesis tools don't always find them. So I tend to just leave this "feature" turned off. Cheers, -Ben- (Whoops, off topic...)Article: 111298
"KJ" <Kevin.Jennings@Unisys.com> writes: <snip> > To have vendor independent useful modules like this, these modules > should be standardized. This is exactly the type of thing that LPM > attempted to do. LPM languishes as a standard though because it > didn't get updated to include new and useful modules. More like it was an Altera-driven "standard" that Xilinx never supported, so it never got to be vendor independent. Hmmm, maybe we should all get together and write a Xilinx LPM library.... > Presumably this is because the FPGA vendors would > rather go the Mr. Wizard path and try to lock designers in to their > parts for irrational reasons rather than enhance standards like LPM so > that designers can remain vendor neutral at design time and let the > parts selection be based on rational reasons like cost, function and > performance. > > Surely not :-) Cheers, Martin -- martin.j.thompson@trw.com TRW Conekt - Consultancy in Engineering, Knowledge and Technology http://www.conekt.net/electronics.htmlArticle: 111299
Hello, Here is a process that describes how to create the .cdc file for ChipScope debugging. After you add the new .cdc file to the ISE project and open it, the ChipScope Core Inserter window opens. In the second stage of that window you will see 3 tabs: 1. Trigger parameters, 2. Capture parameters, 3. Net connections.

1. In the trigger parameters tab you select the signals that you want to trigger on, i.e. at which point you want to observe the data. Trigger width specifies the maximum number of trigger signals.

2. In the capture parameters tab you will find all the settings for capturing. By default the option "Data same as trigger" is checked, so uncheck this field if you want to observe more data than you use for the trigger. If you uncheck it you will find a "Data width" field which selects the number of signals that you observe. Next, the "Data depth" field specifies the number of clock cycles (samples) over which you want to capture the data. Then select the clock edge, rising or falling.

3. In the net connections tab you make signal connections to the ports that you want to observe. In this window you will find three ports (CLOCK, TRIGGER, DATA), shown in red initially when nothing is connected to them. To make signal connections to the ports, click the "Modify Connections" button; a new window opens showing three tabs (Trigger signals, Clock signals, Data signals) on the right side. On the left side you will see all the design signals. Select the signal you want to connect and its CH number and click "Make Connections"; it will get connected automatically. After connecting all the signals, click OK, save, and return to Project Navigator. At this point all the channels should be connected. You can also change the earlier parameters by going back to the previous sections.

Let me know if you have any questions. On Oct 31, 4:09 am, "nana" <nmic...@utk.edu> wrote: > Hello, > > I am using xup virtex2 pro with xc2vp30 fpga and I am trying to > send data > > between two boards. > > I do not want to use the example (xupv2p_aurora.zip), I want to > create it. > > using aurora software, I generated the aurora core and at the > command page I > > used xilperl to create the .ise file, then I used project > navigator to run the > > project. > > Using chipscope, I synthesized and inserted ila.cdc file into the > design. > > One of my problems is when I try to modify my connections in > chipscope > > inserter I don't know what I'm doing, I am a beginner in this. > Another problem > > is the data, I don't know how to deal with it, in other words I > don't know how > > and where to insert it into the design ( I am not very good in > vhdl or > > verilog). And the last problem is that I am using one board right > now to loop > > my data out and in the board, now when I use two boards how do I > deal with > > this and what file should I make changes to? > > I really appreciate your help > >