"novice" <slimanerechoum@ieee.org> wrote in message news:ee8b44e.14@webx.sUN8CHnE... > I would use a 32-bit accumulator (or a dds) clocked at 24MHz with a constant input of 1.8432*2^32/24. The msb of the output is your clock. Anyone please correct me if I'm wrong. > > dsp novice. First, you're correct. 1.8432*2^32/24 into a 32-bit accumulator works fine. A straight divide-by-13 is close as well. If you want the exact average frequency, you can also have a smaller implementation with more deterministic performance (when you have a 13-clock cycle versus when you have a 14-clock cycle) by implementing a 48/625 DDS instead of a 329853488/2^32 DDS. Rather than using a 32-bit accumulator, use a 10-bit accumulator and add 48 on most cycles but when the accumulator overflows, add 48+1024-625=447 for the single cycle instead. Coded properly, the only extra resources needed in most FPGAs is the overflow detect; the two constants are hard-coded and selected with a single bit within the accumulator logic. Closer to 50% duty cycle with the MSbit of the accumulator can be achieved with other tricks.Article: 80501
Article: 80501

usrdr@yahoo.co.uk wrote:
>
> Also what do you recommend for debugging?

The grey matter between your ears.

--
[100~Plax]sb16i0A2172656B63616820636420726568746F6E61207473754A[dZ1!=b]salax
Article: 80502

Peter Sommerfeld wrote:
> I was amazed by this press release:
>
> http://www.epson.co.jp/e/newsroom/2005/news_2005_02_09.htm
>
> Apparently the asynchronous design uses 70% less power compared to its
> synchronous counterpart.
>
> What do you think they mean by an asynchronous processor? I find it
> hard to believe the majority of the circuitry (pipeline, etc) is
> asynchronous.
>
> Will asynch design ever be feasible in FPGAs? I suppose it would
> require a new tool chain and new ways of thinking, but having never
> done it, I have no idea.
>
> Just something that's got me really curious.

It's also got to be the world's largest 8-bit uC :)

The beauty of async logic is that it self-adjusts to Vcc/temperature/process, so for what Epson are trying to target, it is quite a good choice. They can avoid any regulation, and the hot spot that would create.

The 70% claim is a 'best case' one, and relates to async versus vanilla sync design. [See an earlier thread here on async.] Standard sync CMOS has to have margins for Vcc, temperature, and process, so it is not uncommon to find you can 'clock on the bench' much faster than the design speed.

If you move away from the older 'square box' design, and start to vary Vcc and clock while measuring chip core temperature to determine where the two need to be, you can get closer to async power levels. This is what Intel/AMD routinely do now on their mobile processors. Typical SMPS chips have 4/5/6-bit DAC inputs, allowing the CPU to request its own Vcc level. PLLs and clock generators are now routine - i.e. you now have standard devices to vary the clock and Vcc, and to measure temperature.

In an FPGA, if you wanted to add a process-tracking capability, you could use standard sync tools and make deliberate test cell(s) that fail first. You then vary Vcc/clock until those cells just fail, knowing that the margin in the rest of the design keeps the system operating. Sailing close to the wind, of course, but it would maximise battery life.

-jg
Article: 80503

The basic structure of most FPGAs (with an abundance of flip-flops) favors one-hot encoding.

The alleged problem of many illegal states can easily be alleviated since it is so easy to detect all illegal states, whereas there are no illegal states in binary encoding. So, in one-hot you know at least that something went wrong...

Gray is not a real option, since Gray coding only makes sense when the code sequence is linear, like in a counter. Once you can jump from one state to either one of several, Gray coding has lost its meaning.

Just my $0.02
Peter Alfke
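
[Sketch added for illustration - not from the original post. The illegal-state detection mentioned above is only a few gates: flag any state word that does not have exactly one bit set. The entity name, the four-state width, and the registered error flag are assumptions.]

library ieee;
use ieee.std_logic_1164.all;

entity onehot_check is
  port (
    clk   : in  std_logic;
    state : in  std_logic_vector(3 downto 0);  -- assumed 4-state one-hot register
    err   : out std_logic                      -- '1' when the state word is illegal
  );
end entity;

architecture rtl of onehot_check is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- only the four legal one-hot codes are accepted
      case state is
        when "0001" | "0010" | "0100" | "1000" => err <= '0';
        when others                            => err <= '1';
      end case;
    end if;
  end process;
end architecture;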
Article: 80504

Stephen Lannard wrote:
>>> Indeed there are examples, but i don't find them too helpful. They kind of
>>> assume you have an idea of what you're doing and at the moment i don't. :(
>>>
>>> I'm new to this and maybe i'm expecting too much of the 22v10.
>>
>> Depends on your likely range of X.....
>
> 12ms will need 12 bits @ 1mhz i think.
> I'll stick with a one shot, seems easier.

I'm pretty sure you need 14 bits to get to 12 milliseconds. This means 14 macrocells in addition to the one dividing the input clock. If you want an exercise in partitioning logic, you can do it in two 22V10's.
Article: 80505

All,

I promised I'd get back with a scope picture on S2 power-on surge on Vccint.

Since I don't get to post graphics here, I will have to ask those interested to get with their Xilinx FAE to get a scope shot.

If you email me directly, I can send it also (but I might be spammed to death by the requests .... but I will honor them if they are not too many).

Basically, the 2S60 (that I tested) has between a 3/4 ampere and three ampere surge for a millisecond or less, with a 20 ms ramp-on time.

Power supply sequence doesn't matter. Temperature at cold is worse than temperature at hot, but I have but one part, so I have no idea how that holds over process.

No surges on any other supply.

The leakage follows the spreadsheet, that is, there is a lot of Iccint(leak) at hot (about one ampere at 70C). In fact, by specifying the "turn-on" current required for hot, they are probably able to ignore the 3 amperes at cold (e.g. if the start-up current is equal to or less than the surge, then they can be honest and claim there is no surge).

I will accept that I may have an early version of silicon, and that Paul L. correctly stated that what I have seen is fixed in future tapeouts/silicon, but that remains to be seen as well. Perhaps someone with a production device can confirm this is fixed?

Austin
"Current at cold is worse than current at hot" -- for the surge. Spelled correctly, making nonsense. Apologize for the minor goof. Austin Austin Lesea wrote: > All, > > I promised I'd get back with a scope picture on S2 power on surge on > Vccint. > > Since I don't get to post graphics here, I will have to ask those > interested to get with their Xilinx FAE to get a scope shot. > > If you email me directly, I can send it also (but I might be spammed to > death buy the requests .... but I will honor them if they are not too > many). > > Basically, the 2S60 (that I tested) has between a 3/4 ampere, and three > ampere surge for a millisecond or less with a 20 ms ramp on time. > > Power supply sequence doesn't matter. Temperature at cold is worse than > temperature at hot, but I have but one part, so I have no idea how that > holds of process. > > No surges on any other supply. > > The leakage follows the spreadsheet, that is there is a lot of > Iccint(leak) at hot (about one ampere at 70C). In fact, by specifying > the "turn-on" current required for hot, they are probably able to ignore > the 3 amperes at cold (eg -- if the start up current is equal to or less > than the surge, then they can be honest and claim there is no surge). > > I will accept that I may have an early version of silicon, and that Paul > L. correctly stated that what I have seen is fixed in future > tapeouts/silicon, but that remains to be seen as well. Perhaps someone > with a production device can confirm this is fixed? > > AustinArticle: 80507
Article: 80507

Peter Sommerfeld wrote:
> I was amazed by this press release:
>
> http://www.epson.co.jp/e/newsroom/2005/news_2005_02_09.htm
>
> Apparently the asynchronous design uses 70% less power compared to its
> synchronous counterpart.
>
> What do you think they mean by an asynchronous processor? I find it
> hard to believe the majority of the circuitry (pipeline, etc) is
> asynchronous.

Hi Peter,

A modern processor isn't synchronous at all! It would be very hard to get a global clock all over the system (IMHO).

As far as I know, asynchronous design is very well suited for cryptographic processors, as it is very tough to use DPA or other methods to analyze it. Nevertheless, as far as I know it is still in an experimental phase in workgroups at universities and large companies (e.g. Infineon).

I am not an expert in such things. I just mentioned what I heard.

Cheers
Reinhold

> Will asynch design ever be feasible in FPGAs? I suppose it would
> require a new tool chain and new ways of thinking, but having never
> done it, I have no idea.
>
> Just something that's got me really curious.
>
> -- Pete
Article: 80508

On Mon, 07 Mar 2005 20:33:53 +0100, Reinhold Schmidt <rschmidt@aon.at> wrote:
> Peter Sommerfeld wrote:
>> I was amazed by this press release:
>>
>> http://www.epson.co.jp/e/newsroom/2005/news_2005_02_09.htm
>>
>> Apparently the asynchronous design uses 70% less power compared to its
>> synchronous counterpart.
>>
>> What do you think they mean by an asynchronous processor? I find it
>> hard to believe the majority of the circuitry (pipeline, etc) is
>> asynchronous.
>
> Hi Peter,
>
> A modern processor isn't synchronous at all! It would be very hard to get
> a global clock all over the system (IMHO)

Your second comment is very true, but it doesn't support your first statement. You should check some of Intel's clock distribution papers in ISSCC and JSSC. You would be amazed at the lengths they go to to keep their systems synchronous. It turns out designing an OO, 20-stage-pipeline, 4 GHz x86 processor is more difficult with an asynchronous design methodology.
Article: 80509

Hi,

In a hierarchical synchronous design methodology for FPGAs, is it required to register the outputs of all the modules in a hierarchy (starting from the bottom) right up to the top module, or is it sufficient to register the outputs of the leaf hierarchical module (the bottom-most module)?

Thanks
MORPHEUS
Article: 80510

Gabor wrote:
> Stephen Lannard wrote:
>>>> Indeed there are examples, but i don't find them too helpful. They kind of
>>>> assume you have an idea of what you're doing and at the moment i don't. :(
>>>>
>>>> I'm new to this and maybe i'm expecting too much of the 22v10.
>>>
>>> Depends on your likely range of X.....
>>
>> 12ms will need 12 bits @ 1mhz i think.
>> I'll stick with a one shot, seems easier.
>
> I'm pretty sure you need 14 bits to get to 12 milliseconds. This means
> 14 macrocells in addition to the one dividing the input clock. If you
> want an exercise in partitioning logic, you can do it in two 22V10's.

14 bits will count up to 16.384 ms, but you need to add one MCell for the output pulse JK, to get counter -> monostable action. => 15 MCells

Would suit 2 x 22V10, or 1 x ATF750 (same package).

-jg
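
[Sketch added for illustration - not from the original thread, and written in VHDL for an FPGA rather than 22V10 macrocells. It shows the counter-plus-monostable idea above; the entity, port names, and the synchronous trigger are assumptions. The arithmetic: 12 ms at 1 MHz is 12,000 counts, and 2^13 = 8192 < 12,000 <= 16,384 = 2^14, hence the 14 bits.]

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity one_shot_12ms is
  port (
    clk1m   : in  std_logic;   -- 1 MHz clock (assumed already divided down)
    trigger : in  std_logic;   -- start pulse, assumed synchronous to clk1m
    q       : out std_logic    -- high for roughly 12 ms after a trigger
  );
end entity;

architecture rtl of one_shot_12ms is
  signal count  : unsigned(13 downto 0) := (others => '0');  -- 14-bit counter
  signal active : std_logic := '0';
begin
  process (clk1m)
  begin
    if rising_edge(clk1m) then
      if trigger = '1' then            -- (re)start the one-shot
        count  <= (others => '0');
        active <= '1';
      elsif active = '1' then
        if count = 11999 then          -- 12,000 counts at 1 MHz = 12 ms
          active <= '0';
        else
          count <= count + 1;
        end if;
      end if;
    end if;
  end process;

  q <= active;                         -- the "monostable" output flop
end architecture;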
Article: 80511

Peter Alfke wrote:
> The basic structure of most FPGAs (with an abundance of flip-flops)
> favors one-hot encoding.
> The alleged problem of many illegal states can easily be alleviated
> since it is so easy to detect all illegal states, whereas there are no
> illegal states in binary encoding. So, in one-hot you know at least
> that something went wrong...
> Gray is not a real option, since Gray coding only makes sense when the
> code sequence is linear, like in a counter. Once you can jump from one
> state to either one of several, Gray coding has lost its meaning.

This is broadly correct, but I have coded Gray state engines where the flow is not linear, but there are a very small number of branch choices. That works because there are a number of Gray solutions, and you have the freedom to choose the state-to-Gray mapping. With too many branches, it quickly becomes impossible to keep every transition to a single bit change.

Also, if you are register-constrained (CPLDs), you can code in a mix of binary and one-hot/Gray. Use binary for the 'major' branches, and Gray for the 'minor' branches.

-jg
Article: 80512

Austin Lesea wrote:
> All,
>
> I promised I'd get back with a scope picture on S2 power-on surge on
> Vccint.
>
> Since I don't get to post graphics here, I will have to ask those
> interested to get with their Xilinx FAE to get a scope shot.
>
> If you email me directly, I can send it also (but I might be spammed
> to death by the requests .... but I will honor them if they are not
> too many).

You could give a web link? [less work for everyone?]

-jg
Article: 80513

Jim,

I am sure I can do that, but I have to go through marketing. That will take a while. And then marketing will want to put their imprimatur on it.

I'd prefer to keep it in this forum to give Paul a chance to reply (as I said, I may have early silicon, whose masks may have been modified to fix this behavior).

Austin

Jim Granville wrote:
> Austin Lesea wrote:
>
>> All,
>>
>> I promised I'd get back with a scope picture on S2 power-on surge on
>> Vccint.
>>
>> Since I don't get to post graphics here, I will have to ask those
>> interested to get with their Xilinx FAE to get a scope shot.
>>
>> If you email me directly, I can send it also (but I might be spammed
>> to death by the requests .... but I will honor them if they are not
>> too many).
>
> You could give a web link? [less work for everyone?]
>
> -jg
"Peter Sommerfeld" <psommerfeld@gmail.com> wrote in message news:1110212334.327016.266460@o13g2000cwo.googlegroups.com... >I was amazed by this press release: > > http://www.epson.co.jp/e/newsroom/2005/news_2005_02_09.htm > > Apparently the asynchronous design uses 70% less power compared to its > synchronous counterpart. > > What do you think they mean by an asynchronous processor? I find it > hard to believe the majority of the circuitry (pipeline, etc) is > asynchronous. > > Will asynch design ever be feasible in FPGAs? I suppose it would > require a new tool chain and new ways of thinking, but having never > done it, I have no idea. > > Just something that's got me really curious. ARM is working on one. I think it is based on the Amulet asynchronous processor developed at Manchester University, as that had an ARM architecture. LeonArticle: 80515
Article: 80515

morpheus wrote:
> In a hierarchical synchronous design methodology for FPGAs, is it
> required to register the outputs of all the modules in a hierarchy
> (starting from the bottom) right up to the top module, or is it
> sufficient to register the outputs of the leaf hierarchical module (the
> bottom-most module)?

The only "requirement" is to meet static timing. The rest is style. I prefer registers on all module outputs, as this is how the standard HDL synchronous templates work. I also prefer to minimize hierarchy.

-- Mike Treseler
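
[Sketch added for illustration - not from the original post. A minimal example of the registered-output style described above: the module's only output comes straight from a flip-flop, so timing into the next level of hierarchy does not depend on this module's internal logic depth. The entity and port names are invented.]

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity leaf_module is
  port (
    clk     : in  std_logic;
    a, b    : in  unsigned(7 downto 0);
    sum_reg : out unsigned(8 downto 0)   -- registered output
  );
end entity;

architecture rtl of leaf_module is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- the combinational result is captured in a register before leaving the module
      sum_reg <= resize(a, 9) + resize(b, 9);
    end if;
  end process;
end architecture;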
Article: 80516

Well, I think there is a difference here. On the newsgroup we can say: "Altera Stratix-2 still has the infamous start-up current, even >2 A on a 2S60, and we have measurements to prove it."

But I am not too excited about honoring them with a Xilinx website, only to have Altera then come back and claim that they finally "have REALLY" fixed it.

Gentlemen should show some restraint in washing each other's dirty laundry in public, or calling each other "liar" in public, even when it would be justified, as it is in this case. I prefer the attitude that "I am #1, and certain things are below my dignity. Let the other guy crawl around in the mud, if he enjoys that environment."

But the newsgroup is like a club, where we can be more outspoken and candid...

Peter Alfke
Article: 80517

Peter Alfke wrote:
> Jeremy, neither of us is a fan of clock gating, but I like to explore
> the limits of "synchronous design".

Me too :) When things get difficult, or resources are low, these sorts of tricks can make all the difference, if well understood and controlled (if not, they can be a source of real problems).

> I disagree with your analysis of the need for close delay matching.
>
> Let's take the rising edge case with the OR gate. It only requires that
> the path from clock to Q and to the OR gate has a longer delay than the
> direct connection of the clock to the OR gate. How much longer does not
> matter, until it approaches the clock high time.
> I think that's a reasonable assumption, especially since no designer
> would intentionally delay the clock signal that is to be gated.
> I had warned against using this trick for very high-frequency clocks.

I think we actually agree - I'm just being a bit more paranoid :)

Routing delay can add a variable amount of time (per synthesis run) - by using appropriate constraints, the operation can be guaranteed to either fail PAR, or work, yes? This would imply that this technique could possibly be used for mid-to-high frequency clocks (if the technology is fast enough), if appropriately constrained.

The designer may not intend for the clock to be delayed (irrelevant if using BUFGs), but there are situations where the clock could be delayed if the problem is not well enough understood (local routing, anyone?). I think that when we hit the 'limits', what it actually means is that more of the checking either falls strictly on the designer, or more care is needed in order to tell the tools what checking they need to do.

Jeremy
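
[Sketch added for illustration - not from the original thread, and only my reading of the rising-edge/OR-gate case being discussed. The signal names are invented, and it relies on the delay relationship Peter describes plus appropriate timing constraints; it is not a drop-in recommendation.]

library ieee;
use ieee.std_logic_1164.all;

entity or_clock_gate is
  port (
    clk       : in  std_logic;
    stop_next : in  std_logic;   -- request to stop the gated clock
    gated_clk : out std_logic
  );
end entity;

architecture rtl of or_clock_gate is
  signal stop : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      stop <= stop_next;  -- changes just after the rising edge, while clk is still high
    end if;
  end process;

  -- stop = '1' holds the gated clock high, suppressing further rising edges;
  -- because stop changes only while clk is high, the OR output does not glitch,
  -- provided the clock-to-Q-to-OR path is slower than the direct clock-to-OR path
  gated_clk <= clk or stop;
end architecture;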
Article: 80518

Does anybody have the gcc/binutils sources delivered with the Linux version of NIOS2 1.1? I have had a ticket open with Altera since February 4th, and they are just stuck with the compiler rebuild problem on UNIX hosts when I use the toolchain sources from my NIOS2 Windows version.

Also, I never got any feedback on why they removed the gcc/binutils sources from their ftp with the "doc" user login mentioned once on their website...

thanx in advance
rick
Article: 80519

By scoping the sync stripper (EL4511), the H sync seems to be jitter free. This Hsync is given to the PLL, which generates a 27 MHz clock that is locked to the Hsync. This clock is fed into the FPGA, which does the subcarrier locking. The 27 MHz coming into the FPGA is upscaled to 54 MHz; on the rising edge of this 54 MHz clock the subcarrier is locked to the reference input signal (composite video). Do you think this helps the subcarrier lock to the 54 MHz clock? The PLL chip I am using is an MK2069.

BTW, what is the technique to measure the jitter? Also, the 27 MHz clock input to the FPGA is not a proper square wave, and as far as I know this could be a reason for jitter as well. What do you have to say on this?

The input analog composite video signal is converted to 12-bit digital before being given to the FPGA (for locking the subcarrier). The output from the FPGA is given to the DAC, which gives the analog black burst. Do you think these DACs and ADCs, which are operated with clocks that are not exactly square waves, will have anything to do with the jitter?

Thanks again... really appreciate your help.
Article: 80520

morpheus wrote:
> Hi,
> In a hierarchical synchronous design methodology for FPGAs, is it
> required to register the outputs of all the modules in a hierarchy
> (starting from the bottom) right up to the top module, or is it
> sufficient to register the outputs of the leaf hierarchical module (the
> bottom-most module)?
> Thanks
> MORPHEUS

The main motive behind making the IOs of a module registered is to make the timing less dependent upon the internals of the module. So if you are making a module to interface with someone else's design, or you want to be able to floorplan the module without worrying as much about the routing delay between modules, then it's good to register the IOs. It's not a requirement, though, and having non-registered outputs doesn't necessarily make the design asynchronous.
Article: 80521

Arash Salarian wrote:
> I think "generally" it is a bad idea for a designer to try to do the
> state assignment by hand, and I would recommend using symbolic names for the
> states and letting the synthesis tool do the "dirty" job. Why? First, this is a
> difficult job, and for large state machines it can easily get tedious and
> potentially produce bugs. Also, what if you want to port your VHDL code from,
> say, Altera to Xilinx or even ASIC? Optimal state assignment can be
> different on different architectures. However, the problem itself is very
> interesting and very IMPORTANT for the synthesis tool designers...

I agree that designing the state transitions should be a separate task from the state encoding, and that a good engineer should focus on the former. However, a good design engineer should take into consideration the impact of different encoding schemes (area, speed, resource use, safety, etc.) on his/her FSM design. More and more synthesizers have automatic FSM exploration built in, and default to one or another encoding. It is the design engineer's responsibility to know such settings (for example, Synplicity defaults to one-hot); otherwise you will have no idea what exactly you are designing.

A final suggestion is to design for portability and maintainability, while being aware of the subtle differences among the synthesizers.
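
[Sketch added for illustration - not from the original post. A minimal example of the symbolic-state style being recommended: the states are an enumerated type, so the encoding (binary, one-hot, Gray, ...) is left entirely to the synthesis tool's FSM extraction settings. The state machine itself is invented.]

library ieee;
use ieee.std_logic_1164.all;

entity simple_fsm is
  port (
    clk, rst : in  std_logic;
    start    : in  std_logic;
    done     : in  std_logic;
    busy     : out std_logic
  );
end entity;

architecture rtl of simple_fsm is
  type state_t is (idle, run, finish);   -- symbolic state names, no hand encoding
  signal state : state_t := idle;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        state <= idle;
      else
        case state is
          when idle   => if start = '1' then state <= run;    end if;
          when run    => if done  = '1' then state <= finish; end if;
          when finish => state <= idle;
        end case;
      end if;
    end if;
  end process;

  busy <= '1' when state = run else '0';
end architecture;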
Article: 80522

Hi Austin,

As I previously indicated, the EP2S60 *ES* (Engineering Sample) does exhibit a surge current. This current does not exist in any other shipping SII device, including the EP2S15, EP2S30, EP2S90ES, EP2S130, and EP2S180. The production EP2S60 devices, shipping later this month, also do not exhibit a surge.

The surge current issue was reflected in the Early Power Estimator 2.0. Since it was fixed, it was removed in EPE 2.1. We just realized that the errata sheet for the ES device is missing this spec; this will be rectified shortly.

> The leakage follows the spreadsheet, that is, there is a lot of
> Iccint(leak) at hot (about one ampere at 70C). In fact, by specifying
> the "turn-on" current required for hot, they are probably able to ignore
> the 3 amperes at cold (e.g. if the start-up current is equal to or less
> than the surge, then they can be honest and claim there is no surge).

The production start-up current is less than the operating (static) current across all temperatures. Incidentally, our ES devices exhibit higher-than-typical static currents. I noticed in the screenshot of our chip in your power seminar that you were measuring an ES device, a fact you decided not to mention in the talk...

> Perhaps someone
> with a production device can confirm this is fixed?

Austin, we have measured a bunch of production devices across process corners, voltages, temperatures, different ramp rates, different power supply start-up conditions, and on different days of the week for good measure. There is no contention-based start-up current.

Please, question our marketing all you want. But our specs are our specs. Do not accuse us of lying.

Paul Leventis
Altera Corp.
Article: 80523

Peter Alfke wrote:
> The basic structure of most FPGAs (with an abundance of flip-flops)
> favors one-hot encoding.
> The alleged problem of many illegal states can easily be alleviated
> since it is so easy to detect all illegal states, whereas there are no
> illegal states in binary encoding. So, in one-hot you know at least
> that something went wrong...

That's not entirely true. Binary encoding also has illegal states if the number of states is not a power of 2. Furthermore, you don't always know that something is wrong with one-hot encoding, e.g. when two bits get flipped with one of them being the "hot" bit. Having said that, I still agree with you that one-hot is very safe and fast.
Article: 80524

Peter Alfke wrote:
> The basic structure of most FPGAs (with an abundance of flip-flops)
> favors one-hot encoding.

You have to try it both ways on each design to be sure. I have found that one-hot encoding usually improves speed by about 3%, but not always. It can be slower in some cases. One-hot always costs something in device utilization. However, I have never seen the choice of encoding make or break the design fit or timing.

-- Mike Treseler

Here's a recent benchmark:

-------------------------------------------------------------------------------
begin  -- ___Relative Synthesis Performance, virtex2
       template_standard;
       -- 180 MHz  50 FF   91 LUTS  encoding=auto  OK
       -- 194 MHz  47 FF   84 LUTS  encoding=bin   MIN LUTS
-------------------------------------------------------------------------------
--     template_s_reset;
       -- 222 MHz  50 FF  102 LUTS  encoding=auto  FAST
       -- 181 MHz  47 FF   91 LUTS  encoding=bin   SMALL
-------------------------------------------------------------------------------
--     template_trick;
       -- 239 MHz  49 FF   92 LUTS  encoding=auto  FASTEST
       -- 184 MHz  46 FF   86 LUTS  encoding=bin   MIN FLOPS