"Jean Nicolle" <j.nicolle@sbcglobal.net> wrote in message news:<xzE2b.324$wj5.10470447@newssvr13.news.prodigy.com>... > my manager said it couldn't be done. So just to prove him wrong :-) > http://www.fpga4fun.com/PWM_DAC.html It's pretty easy... I did a programmable pattern-generator board; one of the tests we cooked up was that it could play a .au file as PWM through any of its output bits. The outputs were on RS422 drivers. The test jig simply AC-coupled the outputs to a small car-stereo speaker. Not hi-fi, by any means, but it never fails to impress and amaze! --aArticle: 59901
"John" <305liuzg@163.net> wrote in message news:<bit2o5$o0$1@mail.cn99.com>... > Hi,all > I use synplify or Quartus to do the synthesis. > What files should I pass to Modelsim to do the timing simulation? The tools should spit out a .SDF file and a new Verilog or VHDL top-level file. Compile the top-level file, replacing the RTL code, and tell the simulator to apply the SDF file appropriately. For example, if your test bench is called foo_tb.v, and it instantiates your design as: mychip uut (this, that, theother); tell ModelSim that it should apply the .SDF to /uut and it should just work. You may also have to compile a library of Altera primatives. I would imagine that Altera's documentation spells all of this out in gory detail :) =aArticle: 59902
Hi Bob: guess I'm flattered that Hal is getting to defend some of the aspects here - what I am pointing out is that in a standard flop design, there are two balanced P/N nodes (assuming you're building a MOS flop - very few done otherwise anymore) that are used to jam another pair that sets the output. The balancing that takes place will only force the output if there is thermal noise - this is how Fairchild, Intel, and most others build the things. Take a poke around the net for articles by Dike and Burton - they have done more work on metastability than almost anyone else out there, and IEEE uses their work as a standard for this stuff. I'm not the author, just parroting what others have done. The amount of noise is not a factor (this is thermal noise, and all gates exhibit about 9nV per root kT) - thanks Boltzmann - but the effects of Miller coupling are as yet not well understood, at least by me. Andrew Bob Perlman wrote: >On Sun, 31 Aug 2003 10:55:19 -0400, rickman <spamgoeshere4@yahoo.com> >wrote: > > > >>Bob Perlman wrote: >> >> >>>On Sat, 30 Aug 2003 23:15:18 -0400, rickman <spamgoeshere4@yahoo.com> >>>wrote: >>> >>> >>> >>>>Hal Murray wrote: >>>> >>>> >>>>>>this has nothing to do with quantization, until you get into QED, but is >>>>>>a matter of statistical thermal noise on two cells that are used to jam >>>>>>the outputs of a flop. You need the noise, but that has nothing to do >>>>>>with undergrad quantum mechanics. Read Peter's stuff - he's quite good >>>>>>and knowledgeable. >>>>>> >>>>>> >>>>>Do I need noise? Why? I thought the normal exponential decay >>>>>was well modeled (Spice?) without noise. Perhaps you need >>>>>it if the FF is "perfectly" balanced but that has a vanishingly >>>>>small probability in the real world. >>>>> >>>>> >>>>I think you are right. There is only one point on a continuous range >>>>that will be perfectly balanced. 
The probability of that is in essence >>>>the inverse of infinity which I don't know that it even has meaning. >>>> >>>>If you require noise to shift you out of metastability, then the people >>>>who argue that more noise will get you out quicker could then be right. >>>> >>>> >>>A metastable failure doesn't require that you land exactly on the >>>balance point. There may be only one point that keeps you in the >>>metastable state forever, but there's a range of points that will >>>delay FF settling long enough to make your design fail. The more time >>>you give the design to settle, the shorter that range of points is. >>> >>>Accordingly, noise doesn't have to kick the FF to that perfect balance >>>point. It need only force you close enough that the FF output >>>transition is sufficiently delayed to hose over the circuit. >>> >>> >>I don't think you understand the point. We are not saying that balance >>or noise are required to demonstrate metastability. It was pointed out >>that in a simulation of the effect, something would be needed to move >>the FF off the balance point and noise was suggested. But in the real >>world the "balance point" is so vanishing small, it would never actually >>happen. That is not saying that the FF can not go metastable without >>being balanced. >> >> > >Whoever said, "If you require noise to shift you out of metastability, >then the people who argue that more noise will get you out quicker >could then be right," could you explain further? Are you saying that >noise is required to resolve the metastable state, or is this a >counter-argument to the "noise may get you out faster" claim? Or is >it something else entirely? > >Bob Perlman >Cambrian Design Works > > >Article: 59903
yes, you have to call promgen for that. Jean "colin hankins" <colinhankins@cox.net> wrote in message news:7qy2b.24498$nf3.10797@fed1read07... > Does anyone know how to concatenate two Xilinx Spartan-II FPGA bit files? > Need to put multiple bit files together because I have daisy chained > multiple fpga's. > > Thank you. > > >Article: 59904
> > > > The (simple) statistical models must fall down when they hit > >the quantization of single electrons. > > How close are we to that ? > > Has anyone tried to actually plot the tail of time/probability, > >to see what law it follows ? > > How does this actual tail vary with temperature. > Hi Jim - > > this has nothing to do with quantization, until you get into QED, but is > a matter of statistical thermal noise on two cells that are used to jam > the outputs of a flop. You need the noise, but that has nothing to do > with undergrad quantum mechanics. Read Peter's stuff - he's quite good > and knowledgeable. I understand the (dominant) thermal aspects, but my physics teacher taught me that all extrapolation is dangerous :) - hence the question about the quantization effects, on the tail. Problem is, the tail is by nature hard to measure, and so mostly we get the statistical arm waving of a continuous 'vanishingly small' curve of Time/Probability. With each FPGA generation, we must be getting closer to being able to look for these more subtle effects. eg Electron charge is in region of 10^-19, and Gate charge of a MOS FET is measured in some fC/um^2 (femto = 10^-15), so in 90nm devices we should be in the 10^-17/10^-18 region. Does anyone have a real value for Qc in 90nm process ? -jgArticle: 59905
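[Editor's note: jg's back-of-the-envelope figure can be sanity-checked. The gate area and charge density below are assumed round numbers for a 90nm-class minimum-size device, not values from the post:]

```python
# Rough sanity check of the "10^-17/10^-18 C" gate-charge estimate.
# Assumptions (not from the post): a minimum gate of ~0.1um x 0.1um
# and a gate charge density of ~5 fC/um^2 at ~1 V.
E_CHARGE = 1.602e-19              # electron charge, coulombs

area_um2 = 0.1 * 0.1              # assumed minimum gate area, um^2
charge_density = 5e-15            # assumed ~5 fC/um^2
q_gate = area_um2 * charge_density   # total gate charge, C
electrons = q_gate / E_CHARGE

print(f"gate charge ~ {q_gate:.1e} C ({electrons:.0f} electrons)")
```

With these assumptions the gate charge lands around 5e-17 C, a few hundred electrons, consistent with the 10^-17/10^-18 region suggested above.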
Hello, Has anyone compared FPGA implementations of full-rate digital FIR filters based on the use of Multiplier Blocks vs. traditional FIRs with constant coefficient multipliers? By full rate, I mean: one output result per clock cycle and no interpolation or decimation. For anyone not familiar, a multiplier block is a network of shifters and adders that performs multiplications by several coefficients efficiently by exploiting common sub-expressions. The multiplier block can be exploited in FIR filters by transposing the standard filter so that the products of all the coefficients with the current input-sample are required simultaneously. Also, by representing the coefficients in the Canonical-Signed-Digit number system (a small number of +1 and -1's) along with common sub-expression sharing the multiplier block can get even smaller. For example, the multiplier block for a 100 tap FIR filter (fp=0.10 and fs=0.12) can be realized with only 61 adds (zero explicit multiplications). See filter example #4 in "FIR Filter Synthesis Algorithms for Minimizing the Delay and the Number of Adders," http://ics.kaist.ac.kr/~dk/papers/TCAD2001.pdf If the adder depth is constrained to a maximum of four, then the authors' algorithm can do the multiplier block in 69 additions. It would seem that this approach would be very efficient in a target such as the Xilinx Spartan-IIE (with no dedicated multipliers). Another question: If we only need one result per K clock periods (K ~= 1000 for audio applications), could a multiplier block approach realized with, say, bit-serial addition be more efficient than some other approach such as distributed arithmetic? Comments welcome. Thanks. -Michael ______________________ Michael E. Spencer, Ph.D. President Signal Processing Solutions, Inc. Web: http://www.spsolutions.comArticle: 59906
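[Editor's note: for readers unfamiliar with Canonical-Signed-Digit recoding, here is a generic CSD (non-adjacent form) conversion sketch — not the KAIST authors' synthesis algorithm, and the coefficient 57 is just an illustrative value:]

```python
def csd(n):
    """Canonical signed-digit representation of a positive integer n,
    LSB first. Digits are -1, 0, +1, with no two adjacent nonzero
    digits, which minimizes the number of add/subtract terms."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
            n //= 2
        else:
            # choose +1 or -1 so the remaining value becomes even
            d = 2 - (n % 4)      # +1 if n % 4 == 1, -1 if n % 4 == 3
            digits.append(d)
            n = (n - d) // 2
    return digits

def from_csd(digits):
    """Reconstruct the integer from its CSD digit list."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

# 57 = 64 - 8 + 1: three nonzero digits vs. four ones in binary 111001
print(csd(57))
```

Each nonzero digit costs one adder/subtractor in hardware, so fewer nonzero digits means a smaller multiplier block even before common sub-expression sharing.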
Hi all, Lets assume I'm using a Xilinx Virtex device and I have a VHDL design that includes the following a<=b+c; Will the design tools (I happen to be using Foundation 2.1i) infer a "simple" adder or will the tools automatically infer an adder that uses the dedicated carry look ahead logic?? Will that logic be placed appropriately (i.e. like the ACC and ADD standard components that use the RLOC constraint)? Thanks for your help.Article: 59907
Hello there! Talking about ASICs, what's (in detail if possible :P ) the difference between "gate arrays" versus "full custom"? I mean price, number of gates on chip, speed. Thanks :) -- MikeArticle: 59908
On Sun, 31 Aug 2003 22:17:41 +0800, "Chen Bin" <sunwen_ling@hotmail.com> wrote: >Hi, >I am a college student, one of my dreams is to design a 16-bit CPU, it has >some basic functions, such as arithmetic and MMU and so on. > >But I don't have any idea how to get it. Can you give some suggestions about >it? > >I mean, what steps should I take to obtain this dream, and at each step, which >book should I read, how long will I get the dream? > >Any help appreciated!! > >This is a cry from a puzzled student for your help. > >Chen Bin Cry no more! Lucky for you, there is a web site that covers this in detail, including a worked and explained example! http://www.fpgacpu.org/ and in particular, look at the 3 PDF files at http://www.fpgacpu.org/xsoc/cc.html Have fun! Philip Philip Freidin FliptronicsArticle: 59909
"rickman" <spamgoeshere4@yahoo.com> wrote in message news:3F50A0E8.15AAA082@yahoo.com... (snip regarding the DEC KA-10, metastability, and self-timed logic) > I am no expert in async logic, but I have never heard of a circuit that > can even detect metastability. I also thought that async logic did not > "measure" the time it took for a calculation, it simply allowed > different times for different calculations. The control path for a > given circuit has a longer delay than the data path and would be > dependant on the calculation being performed. How exactly would a > circuit detect when an async calculation is complete? Well, I agree with the skepticism in the first place, but consider a CPU with lights indicating the current contents of the registers. Normally, the values will be changing very fast. If they suddenly stop changing, it could be because of unresolved metastability. The logic will wait quietly for it to resolve, and then continue. It might be, though, that the machine is in I/O wait, and there really is nothing to do. I never got to actually see the machine, but at the time many machines had console lights, at least for the instruction counter and a few other important registers. (Though I don't believe that the PDP-10 had a wait state like IBM S/360 did, where no instructions were executed.) -- glenArticle: 59910
sometimes, googling helps...... http://www-ee.eng.hawaii.edu/~msmith/ASICs/HTML/ASICs.htm#anchor5290309Article: 59911
On 29 Aug 2003 16:23:32 -0700, symon_brewer@hotmail.com (Symon) wrote: >Hi, > Before I start, metastability is like death and taxes, >unavoidable! That said, I've read the latest metastability thread. I >thought these points were interesting. > >Firstly, A quote from Peter, who has carried out a most thorough >experimental investigation :- >"I have never seen strange levels or oscillations ( well, 25 years ago >we had TTL oscillations). Metastability just affects the delay on the >Q output." > > >Secondly, from Philip's excellent FAQ :- Thanks. >"Metastable outputs can be > >1) Oscillations from Voh to Vol, that eventually stop. >2) Oscillations that occur (and may not even cross) Voh and Vol >3) A stable signal between Voh and Vol, that eventually resolves. >4) A signal that transitions to the opposite state of the pre > clock state, and then some time later (without a clock edge) > transitions back to the original state. >5) A signal that transitions to the opposite state later than the > specified clock-to-output delay. >6) Probably some more that I haven't remembered. " > > So, this got me thinking on the best way to mitigate the effects of >metastability. If Peter is correct in his analysis of his experimental >data, and I've no reason to doubt this, then Philip's option 5) is the >form of metastability appearing in Peter's Xilinx FPGA experiments. Peter's experimental data revolves around detecting metastables, and counting them to create the data we use for our calculations. Very good stuff! I too have created metastability test systems, which not only count the metastables, but also display them on an oscilloscope. I would like to make a very strong distinction about the typically presented scope pictures of metastability, and the data that I have taken. What you normally see published (in terms of scope photos, as opposed to drawn diagrams) is a screen of dots representing samples of the Q output. 
These scopes are high bandwidth sampling scopes that typically take 1 sample per sweep, and rely on the signal being repetitive to build up a picture of what is going on. Examples are the Tek 11801 and 11803, as well as the newer TDS7000 and TDS8000 . The CSA11803 and CSA8000 are basically the same scopes with some extra software. The picture at the top of page 3 of this document is typical: http://www.onsemi.com/pub/Collateral/AN1504-D.PDF The scope is triggered by the same clock as the clock to the device under test (DUT), and the scope takes a random sample (or maybe a few samples) over the duration of the sweep. Most sweeps are of the flip flop not going metastable, and so the dots accumulate and show the trajectory of the flip flop. Occasionally the flip flop goes metastable, and sometimes the random sample occurs during the metastable time. These show up as the dots that are to the right of the solid rising edge on the left. Every dot that is not on that left edge represents times when the flip flop had a longer than normal transition time, after you take into consideration clock jitter, data output jitter, scope trigger jitter, and scope sweep jitter. All of these can be characterized by first doing a test run that does not violate the setup and hold times of the DUT. The problem with these test systems is that when you do record a metastable event, you only get 1 sample point on the trajectory and you can say very little about the trajectory, other than it passed through that point. Even when these scopes take multiple samples per sweep, they are often microseconds apart, and of little interest in the domain we are talking about here. Although the collected data is predominantly of non metastable transitions, these all pile up on top of each other as the left edge of the trace, and do not significantly detract from seeing the more interesting dots to the right. The test systems that I have designed are quite different. 
These test systems only collect trajectory data when the flip flop goes metastable, and they sample the DUT output at 1GSamples per second, thus taking a sample every nanosecond. The result is that the scope pictures I have show the actual trajectory of the metastable. For your viewing pleasure, I have put them up on the web: www.fpga-faq.com/Images/meta_pic_1.jpg www.fpga-faq.com/Images/meta_pic_2.jpg www.fpga-faq.com/Images/meta_pic_3.jpg These are far from just delayed outputs! The end result though is still the same, systems that fail. But seeing these scope pictures of the actual Q output might make you think about how you measure metastability. For example, on meta_pic_1.jpg lower trace, the vertical scale is 1V per division, and the 0V level is 1/2 a division above the bottom of the screen. The horizontal scale is 4ns per division. Now what if your test system took a sample at 10ns, and used a threshold of 1.5 volts (2 div up from the bottom of the picture). You would say that the signal is always high at this point. If you sampled again at 20 ns (middle of the screen), you would say that it has resolved for all the traces shown, and you would count all the transitions that returned to ground (because they were high at 10ns). All those traces that ended up high would not be counted. This would be bad if in the real system the device listening to the DUT happened to have a threshold of 2.1 volts (right in the middle of that cute little hump). This also shows why using a signal like this as a clock could be a real disaster. Knowing what the trajectory of the DUT output looks like can make you think a lot harder about how you test it. > So, bearing this in mind, a thought experiment. We have an async >input, moving to a synchronising clock domain at (say) 1000MHz. Say we >have a budget of 5ns of latency to mitigate metastability. The sample >is captured after the metastability mitigation circuit (MMC) with a FF >called the output FF. 
> My first question is, which of these choices of MMC is least likely >to produce metastability at the output FF? >1) The MMC is a 4 FF long shift register clocked at 1000MHz. >MMC1 : process(clock) >begin > if rising_edge(clock) then > FF1 <= input; > FF2 <= FF1; > FF3 <= FF2; > FF4 <= FF3; > output <= FF4; > end if; >end process; Basically the improvement in MTBF is a function of the slack time you give it to resolve. This is the sum of the slack time between FF1 and FF2, FF2 and FF3, FF3 and FF4, and FF4 and output. Lets throw some numbers at it. Setup time is 75ps, clock to Q is 200ps, routing delay between any pair of Q to D paths is 100ps. Clock distribution skew is 25ps (in the unfortunate direction). So we have 4 paths of 1000ps - (75+200+100+25) = 4 * 600ps = 2.4ns >2) The MMC is 4 FFs, each clock enabled every second clock. >MMC2 : process(clock) >begin > if rising_edge(clock) then > toggle <= not toggle; > if toggle = '1' then > FF1 <= input; > FF3 <= FF1; > output <= FF3; > else > FF2 <= input; > FF4 <= FF2; > output <= FF4; > end if; > end if; >end process; Ok, so this is weird, and it adds a mux :-) Transit time through muxes is 200ps (assume that getting toggle to it is a non issue), and its output connects to the D of the output FF. No extra routing delay. Path 1: slack from FF1 to FF3 plus slack from FF3 to output 2000-(75+200+100+25)+2000-(75+200+100+25+200)= 3.0ns Path 2: same slacks, just different FFs 3.0ns So: weird but better :-) I could of course screw up the results by changing the delay numbers, but they are pretty realistic for current technology. > Option 1) offers extra stages of synchronization between the input >and output, but the 1ns gap between FFs means that metastability is >more likely to propagate. Option 2) waits 2ns for the sample FFs to >make up their mind, vastly decreasing the metastability probability. Yep. > My second question is, does the type of metastability, i.e. 
the >things in Philip's list, affect which is the better choice? For >instance, if the first FF in the MMC exhibits oscillations in >metastability, then the second FF in the MMC would have several >chances, as its input oscillates, to sample at the 'wrong' time. This >might favour MMC option 2). Actually MMC 2 is favored regardless of oscillations or not because of the 600ps of additional slack time. >If, however, the first FF in the MMC goes >into option 5) metastability, then there's only one chance for the >second FF to sample at the 'wrong' time. This might confer an >advantage on MMC option 1). My thinking on this has always been that the only thing that matters is the resolving time (slack) and the thought experiments about later stages sampling at just the right time to catch the previous FF resolving only cloud the issue. I am not as confident on this issue as I am on others though. What I am confident on though, is that there is a better MMC than your two, and it follows on from MMC #2. Just use 2 FFs, and clock them every 4 ns: (that is, enable them every 4th clock cycle. This would mean though that unlike MMC #2, which runs 2 parallel paths and avoids some latency, this could have up to 4 ns of extra latency, if you just miss the input change) slack from FF1 to output: 4000-(75+200+100+25) = 3.6ns If the latency is really a problem, you could build on the MMC #2 design and have 4 paths each out of phase by 1 clock cycle. Since the path is now only 2 FFs long you would have to have 4 output FFs, and the selector mux would be after these 4 FFs. On the bright side, the mux delay does not eat into the resolving slack time, but it would eat some of the available cycle time in the logic that follows the output FF. > > Anyway, I'm still thinking about this. I think the clock frequency >may decide which is better for a given FF type. Any comments? > Cheers, Syms. Thanks for an interesting question. 
Comments above :-) Philip Freidin Philip Freidin FliptronicsArticle: 59912
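[Editor's note: Philip's slack arithmetic above can be replayed directly. This sketch just re-derives his 2.4ns / 3.0ns / 3.6ns totals from the delays he states (setup 75ps, clock-to-Q 200ps, routing 100ps, clock skew 25ps, mux 200ps):]

```python
# Resolving-slack comparison of the three MMC variants discussed above.
TSU, TCQ, TRT, TSKEW, TMUX = 75, 200, 100, 25, 200   # all in ps

def slack(period_ps, extra=0):
    """Slack per synchronizer stage: period minus the fixed delays."""
    return period_ps - (TSU + TCQ + TRT + TSKEW + extra)

# MMC 1: four back-to-back stages at 1ns each
mmc1 = 4 * slack(1000)
# MMC 2: two 2ns stages, the second path going through the output mux
mmc2 = slack(2000) + slack(2000, extra=TMUX)
# Philip's suggestion: two FFs enabled every 4th cycle -> one 4ns stage
mmc3 = slack(4000)

print(mmc1, mmc2, mmc3)   # total resolving slack in ps
```

The totals come out 2400, 3000 and 3600 ps, matching the figures in the post: fewer, slower-enabled stages buy more resolving time than many fast ones.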
"Paul Baxter" <pauljnospambaxter@hotnospammail.com> wrote in message news:3f524aa7$0$254$cc9e4d1f@news.dial.pipex.com... > At the risk of starting a toolset discussion, I would recommend > www.aldec.com and their ActiveHDL product. > > Its mainly a very user friendly simulator (still not perfect, but a lot > friendlier than Modelsim Can you explain a bit more please? I've heard many people say that Aldec is more "user-friendly", but personally (strictly personally!) I've always found its project system and its obsessive copying of files from place to place to be thoroughly confusing. It makes life very easy for you if you have just an HDL design and a single testbench file and no other tools interested in those source files, but as soon as I try to do anything more complicated I get hopelessly mired in its bizarre project system. (By the way, I always detested the Aldec project machine in the older Xilinx tools!). Once again, for emphasis: this is my personal opinion only; we're happy to support any simulators for our training courses and other work; and my experience with Aldec is fairly limited, whereas I use ModelSim for the majority of my day-to-day HDL work. So I'd be delighted to hear from any Aldec champions out there. -- Jonathan Bromley, Consultant DOULOS - Developing Design Know-how VHDL * Verilog * SystemC * Perl * Tcl/Tk * Verification * Project Services Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, Hampshire, BH24 1AW, UK Tel: +44 (0)1425 471223 mail: jonathan.bromley@doulos.com Fax: +44 (0)1425 471573 Web: http://www.doulos.com The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.Article: 59913
"Skull-Lee" <Skull-Lee@tuks.co.za> wrote in message news:1062408392.909742@nntp.up.ac.za... > I need to comunicate with a DSP in serie. > Is there a way that any one know of? What DSP? Most of the Texas DSP processors have various different serial interfaces, all of which are fully documented in the data sheets. I don't know about other manufacturers' offerings, but I suspect something similar is available. DSP chips commonly use serial interfaces to communicate with slow peripherals such as audio DACs. -- Jonathan Bromley, Consultant DOULOS - Developing Design Know-how VHDL * Verilog * SystemC * Perl * Tcl/Tk * Verification * Project Services Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, Hampshire, BH24 1AW, UK Tel: +44 (0)1425 471223 mail: jonathan.bromley@doulos.com Fax: +44 (0)1425 471573 Web: http://www.doulos.com The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.Article: 59914
Hi there! If I e.g. place a RAM component into MultiSim, will it simulate it? I.e. can I (using switches and hex displays) write on the RAM and then read back? Is the IC really simulated, or it's just a graphic symbol? If so, is there any other program that will do it? Thanks! -- MikeArticle: 59915
Now I find that if the DMA's dest address is the SDRAM, it behaves the same as above. But if it is the single-port RAM (SRAM), this DMA module works as expected. ???????????? algous2002@yahoo.com.cn (algous) wrote in message news:<1e71fcd5.0308270040.48c88c05@posting.google.com>... > A Master burst module want to write something via the > PLD-to-STRIP bridge. 14 beat a burst. > first the slavehready signal is ready. since begin the > translation, the slavehready signal goes down, and will > never be ready until the system is rebooted. > os is linux-2.4.19. > > WHY?Article: 59916
Hi all, I'm using a VIIpro (P30, -5). My data needs to be buffered before going to the DDR memory. The DDR databus width is 96bit (6 devices), which generates a 192bit internal databus. Target clock rate is 166MHz. Write burst data needs to be supplied out of the buffer, so logically I need a 192bit wide buffer... But does anyone have any faith in the solution of reading a blockram at 333MHz & doubling the data width in registers? I tried it with a very simple design, and I get almost there... here's the timing report...

Timing Report
************************************************************************************************
Timing constraint: TS_clkrd2x = PERIOD TIMEGRP "clkrd2x" 3.003 nS HIGH 50.000000 % ;
150 items analyzed, 2 timing errors detected. (2 setup errors, 0 hold errors)
Minimum period is 3.017ns.
--------------------------------------------------------------------------------
Slack:               -0.014ns (requirement - (data path - clock skew))
Source:              ram0/B5.B (RAM)
Destination:         ram0/BU42 (FF)
Requirement:         3.003ns
Data Path Delay:     3.017ns (Levels of Logic = 0)
Clock Skew:          0.000ns
Source Clock:        clkrd2x_bufgp rising at 0.000ns
Destination Clock:   clkrd2x_bufgp rising at 3.003ns
Timing Improvement Wizard
Data Path: ram0/B5.B to ram0/BU42
  Location             Delay type      Delay(ns)  Physical Resource / Logical Resource(s)
  RAMB16_X5Y19.DOB17   Tbcko           1.680      ram0/B5 / ram0/B5.B
  SLICE_X65Y158.BX     net (fanout=1)  1.074      ram0/N849
  SLICE_X65Y158.CLK    Tdick           0.263      dram<17> / ram0/BU42
  Total                                3.017ns    (1.943ns logic, 1.074ns route) (64.4% logic, 35.6% route)
****************************************************************************************************

So the blockram is fast enough, but the routing around it cannot handle it quite so well. This exercise was made with only 1 blockram in an empty V2P30. 
I would need 12 blockrams at 333MHz if I go for it. Otherwise, I'll go for the 24 BRAMs at 166MHz. I'll synthesise the 333MHz clock using a DCM; could the phase shift save me some picoseconds somewhere? So, questions... 1. Anyone have any experience running BRAM at such a speed? 2. Anyone have any tips or recommendations to get it somewhat faster? 3. Anyone have any other ideas to make my thing happen... thanks y'all for your time. ~ioloArticle: 59917
The exponential weighted averaging cannot be used because all data in the window have to be treated equally, as all have the same importance. If using an LPF, and n is the number of samples in the window, then to get an average of the last 100 values received your filter has to be 100 taps long. Correct? I am asking just in case I have not understood fully how digital filters are implemented. The truth is that I have never done it, just read about it. Christos "Hal Murray" <hmurray@suespammers.org> wrote in message news:vl1tm7725v2310@corp.supernews.com... > [exponential weighted averaging] > > > a_k = (1/(n+1))*s_k + (n/(n+1))*a_k-1 > > >As you can see this is quite easy to implement, requiring two > >multiplies, one addition, and one register for a_k-1 storage. If you > >choose n+1 to be a power of two then one of the multiplications (or I > >guess it is a divide) becomes a simple shift operation. > > The other multiply/divide turns into a shift and subtract. > > -- > The suespammers.org mail server is located in California. So are all my > other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited > commercial e-mail to my suespammers.org address or any of my other addresses. > These are my opinions, not necessarily my employer's. I hate spam. >Article: 59918
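[Editor's note: Christos's point can be demonstrated numerically. The exponential filter weights recent samples more and never completely forgets old ones, whereas an n-tap boxcar weights the last n samples equally and forgets everything older. A small comparison — the step input is just an illustrative test signal:]

```python
from collections import deque

def boxcar(samples, n):
    """True moving average over the last n samples, all weighted equally."""
    window, out = deque(maxlen=n), []
    for s in samples:
        window.append(s)
        out.append(sum(window) / len(window))
    return out

def exp_avg(samples, n):
    """Exponential average a_k = s_k/(n+1) + a_{k-1}*n/(n+1), as in
    Hal's formula above, starting from a_0 = 0."""
    a, out = 0.0, []
    for s in samples:
        a = s / (n + 1) + a * n / (n + 1)
        out.append(a)
    return out

data = [0.0] * 50 + [1.0] * 200
print(boxcar(data, 100)[-1])   # exactly 1.0: the zeros have left the window
print(exp_avg(data, 100)[-1])  # still below 1.0: the zeros never fully decay
```

So for a strict "average of the last 100 values" the 100-tap boxcar (or a 100-deep buffer with a running sum) is needed; the one-register exponential filter only approximates it.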
> If you think there is no uncertainty in measuring the spin of an > electron, you need to go back to school. Rick, I didn't say that. I said electron spin does not present metastability. Anyway, did you hear about spintronics? Maybe the scientists behind this idea should go back to school too! Look at http://www.eetimes.com/story/OEG20001221S0035 Cut and paste please, I don't know how to include hypertexts. I think that shows my whole point of view: if you believe, insist! Unhappily (for me) I could not find the text about the electron position. Luiz CarlosArticle: 59919
Thanks and Regards!Article: 59920
Hello, I have a custom-made FPGA-board with a Xilinx Virtex-E (XCV-300E) FPGA. I'm trying to measure the FPGA's power consumption over a sense resistor between the FPGA's ground pins and the board's ground. The trace of the current has peaks, which occur at a rate of approximately 700 kHz (independent of whether the FPGA is configured or not). I was wondering if those peaks may result from the refreshing of the FPGA's BlockRAM cells. Is that possible, and if the BlockRAM cells' refreshing is causing the current peaks, is there a way to deactivate refreshing (as I'm not using the RAMs) with the Xilinx WebPack (ISE 5)? Best regards, Stefan TillichArticle: 59921
Hi Kload! > Lets assume I'm using a Xilinx Virtex device and I have a VHDL design > that includes the following > > a<=b+c; > > Will the design tools (I happen to be using Foundation 2.1i) infer a > "simple" adder or will the tools automatically infer an adder that uses > the dedicated carry look ahead logic?? With no synthesis constraints: a carry-ripple adder, because it's the smallest. With speed constraints: depending on the synthesis tool and target library, often carry-lookahead. With something like "synthesis pragmas" (supported by Synopsys) you can manually choose the type of adder. RalfArticle: 59922
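[Editor's note: the "simple" ripple structure mentioned in the answer above can be modeled at bit level; each stage computes a sum bit and a carry from the previous stage's carry. This is a behavioral sketch of the structure, not what the Xilinx mapper literally emits — on Virtex the tools normally place exactly this carry chain onto the dedicated fast-carry resources:]

```python
def ripple_add(a, b, width=8):
    """Bit-level ripple-carry addition of two unsigned integers.

    Each iteration is one full-adder stage: sum = a ^ b ^ carry_in,
    carry_out = majority(a, b, carry_in). The final carry is dropped,
    so the result wraps modulo 2**width, like a fixed-width adder.
    """
    carry, result = 0, 0
    for i in range(width):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        result |= (ai ^ bi ^ carry) << i
        carry = (ai & bi) | (carry & (ai ^ bi))
    return result

print(ripple_add(100, 55))   # 155
```

The carry input of stage i depends on stage i-1, which is why the plain ripple adder is small but its worst-case delay grows linearly with width; carry-lookahead (or the FPGA's hardened carry chain) attacks exactly that serial dependency.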
Andrew - Thanks for the reply. The Burton/Dike 1999 paper on Miller Effect sounds interesting; I'll try to get a copy. Bob Perlman On Sun, 31 Aug 2003 22:31:39 -0500, Andrew Paule <lsboogy@qwest.net> wrote: >Hi Bob: > >guess I'm flattered that Hal is getting to defend some of the aspects >here - what I am pointing out is that in a standard flop design, there >are two balanced P/N nodes (assuming you're building a MOS flop - very >few done otherwise anymore) that are used to jam another pair that sets >the output. The balancing that takes place will only force the output >if there is thermal noise - this is how Fiarchild, Intel, and most >others build the things. Take a poke around the net for articles by >Dike and Burton - they have done more work on metastability that almost >anyone else out there, and IEEE uses their work as a standard for this >stuff. I'm not the author, just parroting what others have done. > >The amount of noise is not a factor (this is thermal noise, and all >gates exhibit about 9nV per root kT) - thanks boltzman - but the effects >of Miller coupling is as of yet not well understood, at least by me. > >Andrew > >Bob Perlman wrote: > >>On Sun, 31 Aug 2003 10:55:19 -0400, rickman <spamgoeshere4@yahoo.com> >>wrote: >> >> >> >>>Bob Perlman wrote: >>> >>> >>>>On Sat, 30 Aug 2003 23:15:18 -0400, rickman <spamgoeshere4@yahoo.com> >>>>wrote: >>>> >>>> >>>> >>>>>Hal Murray wrote: >>>>> >>>>> >>>>>>>this has nothing to do with quantization, until you get into QED, but is >>>>>>>a matter of statistical thermal noise on two cells that are used to jam >>>>>>>the outputs of a flop. You need the noise, but that has nothing to do >>>>>>>with undergrad quantum mechanics. Read Peter's stuff - he's quite good >>>>>>>and knowledgable. >>>>>>> >>>>>>> >>>>>>Do I need noise? Why? I thought the normal exponential decay >>>>>>was well modeled (Spice?) without noise. 
Perhaps you need >>>>>>it if the FF is "perfectly" balanced but that has a vanishingly >>>>>>small probability in the real world. >>>>>> >>>>>> >>>>>I think you are right. There is only one point on a continuous range >>>>>that will be perfectly balanced. The probability of that is in essence >>>>>the inverse of infinity, which I don't know that it even has meaning. >>>>> >>>>>If you require noise to shift you out of metastability, then the people >>>>>who argue that more noise will get you out quicker could then be right. >>>>> >>>>> >>>>A metastable failure doesn't require that you land exactly on the >>>>balance point. There may be only one point that keeps you in the >>>>metastable state forever, but there's a range of points that will >>>>delay FF settling long enough to make your design fail. The more time >>>>you give the design to settle, the shorter that range of points is. >>>> >>>>Accordingly, noise doesn't have to kick the FF to that perfect balance >>>>point. It need only force you close enough that the FF output >>>>transition is sufficiently delayed to hose over the circuit. >>>> >>>> >>>I don't think you understand the point. We are not saying that balance >>>or noise are required to demonstrate metastability. It was pointed out >>>that in a simulation of the effect, something would be needed to move >>>the FF off the balance point and noise was suggested. But in the real >>>world the "balance point" is so vanishingly small, it would never actually >>>happen. That is not saying that the FF cannot go metastable without >>>being balanced. >>> >>> >> >>Whoever said, "If you require noise to shift you out of metastability, >>then the people who argue that more noise will get you out quicker >>could then be right," could you explain further? Are you saying that >>noise is required to resolve the metastable state, or is this a >>counter-argument to the "noise may get you out faster" claim? Or is >>it something else entirely? 
>> >>Bob Perlman >>Cambrian Design Works >> >> >>Article: 59923
On Mon, 1 Sep 2003 20:52:37 +0800, "John" <305liuzg@163.net> wrote: >Thanks and Regards! > > There is a downloadable port on the UCOSII site. UCOSII is a robust (compared to other free things on the net) RTOS which you can obtain at low cost for development (buy the book) but you must pay the license fees when it's time to ship your product.Article: 59924
christos.zamantzas@cern.ch wrote: > The final system will have to keep 10 moving sums with the largest being > 250,000 (8-bit) values for each of the 16 independent input channels. and >the sample rate is slow enough: 25 kHz This is easy to do, even without the CIC filters suggested by Ray Andraka: In external memory you keep a circular buffer of 16x250,000 samples. You keep your 160 sums inside of your FPGA. To update them you do the following: For each channel, input the new sample X and write it to its external RAM location in the circular buffer. Then, for each moving sum of this channel, read the value Y that "falls off" the sum from external RAM and add X-Y to the moving sum inside the FPGA. This requires 16 writes and 160 reads to external memory, with a resulting bandwidth of 4,400,000 memory accesses per second. If the values are stored in memory with the right alignment you can do 4 accesses in parallel, reducing the bandwidth to 1,100,000 accesses per second. Maybe you should instantiate a processor in your FPGA and use that to implement this. OT: What are you doing at LHC that has a sample rate of 25 kHz? Kolja Sulimma Frankfurt
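The update scheme described above can be sketched in software (Python here, purely for clarity) for a single channel: keep the running sums in fast storage, keep raw samples in a circular buffer standing in for the external RAM, and on each new sample subtract the value that falls off each window and add the new one. All names and the tiny window sizes are illustrative, not from the original post.

```python
# Sketch of the incremental moving-sum update: one circular buffer of
# raw samples (the "external memory") plus one running sum per window.
# Each new sample costs one buffer write and one read per window,
# matching the 16-write / 160-read accounting in the post.

class MovingSums:
    def __init__(self, window_sizes, max_window):
        self.window_sizes = window_sizes      # one entry per moving sum
        self.buf = [0] * max_window           # circular sample buffer
        self.pos = 0                          # next write index
        self.count = 0                        # samples seen so far
        self.sums = [0] * len(window_sizes)   # running sums (kept "on chip")

    def update(self, x):
        for i, w in enumerate(self.window_sizes):
            # The sample that falls off window w was written w steps ago.
            y = self.buf[(self.pos - w) % len(self.buf)] if self.count >= w else 0
            self.sums[i] += x - y
        self.buf[self.pos] = x
        self.pos = (self.pos + 1) % len(self.buf)
        self.count += 1

ms = MovingSums(window_sizes=[2, 4], max_window=4)
for sample in [1, 2, 3, 4, 5]:
    ms.update(sample)
print(ms.sums)   # [9, 14]: sum of last 2 samples, sum of last 4 samples
```

The key point, as in the post, is that the cost per sample is proportional to the number of windows, not the window lengths, so even a 250,000-sample window costs one read and one add per update.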